Nov 23 03:55:02 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 23 03:55:02 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:02 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 03:55:02 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 03:55:03 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 23 03:55:04 crc kubenswrapper[4751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 03:55:04 crc kubenswrapper[4751]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 23 03:55:04 crc kubenswrapper[4751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 03:55:04 crc kubenswrapper[4751]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 23 03:55:04 crc kubenswrapper[4751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 23 03:55:04 crc kubenswrapper[4751]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.385676 4751 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394605 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394636 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394646 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394668 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394679 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394686 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394695 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394703 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394710 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394718 4751 feature_gate.go:330] unrecognized feature gate: Example Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394726 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394734 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394741 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394749 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394760 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394770 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394780 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394788 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394796 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394804 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394812 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394820 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394828 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394835 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394843 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394851 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394860 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394871 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394881 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394891 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394899 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394909 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394917 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394925 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394933 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394941 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394948 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394957 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394964 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394972 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394979 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394989 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.394999 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395008 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395016 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395026 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395035 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395043 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395051 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395059 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395066 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395076 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395084 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395092 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395101 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395109 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395121 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
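[editor's note] The long runs of "unrecognized feature gate" warnings are expected on OpenShift: the cluster feature set carries OpenShift-only gates (GatewayAPI, PinnedImages, and so on) that the upstream kubelet's registry does not contain, so feature_gate.go warns and skips them, while overrides of GA or deprecated gates (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, KMSv1) are applied with a removal notice. A toy, stdlib-only re-creation of that dispatch; the gate table below is an illustrative assumption, not the kubelet's real registry:

package main

// Toy model of the feature_gate.go behavior visible in this log:
// unknown gate names only warn, GA/deprecated gates warn that the
// override will stop working, everything else is applied silently.
import (
	"fmt"
	"os"
)

type gateState int

const (
	alpha gateState = iota
	ga
	deprecated
)

// Assumed registry for illustration only.
var known = map[string]gateState{
	"CloudDualStackNodeIPs":     ga,
	"ValidatingAdmissionPolicy": ga,
	"KMSv1":                     deprecated,
	"NodeSwap":                  alpha,
}

func set(gates map[string]bool, name string, value bool) {
	state, ok := known[name]
	if !ok {
		// Matches the "unrecognized feature gate: <name>" lines:
		// the gate is ignored, not treated as an error.
		fmt.Fprintf(os.Stderr, "W unrecognized feature gate: %s\n", name)
		return
	}
	switch state {
	case ga:
		fmt.Fprintf(os.Stderr, "W Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, value)
	case deprecated:
		fmt.Fprintf(os.Stderr, "W Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, value)
	}
	gates[name] = value
}

func main() {
	gates := map[string]bool{}
	set(gates, "CloudDualStackNodeIPs", true)
	set(gates, "GatewayAPI", true) // unknown to the (toy) kubelet
	set(gates, "KMSv1", true)
	fmt.Println("feature gates:", gates)
}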
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395131 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395140 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395148 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395156 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395163 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395171 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395178 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395186 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395194 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395203 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395212 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395221 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395228 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.395236 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396806 4751 flags.go:64] FLAG: --address="0.0.0.0" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396830 4751 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396846 4751 flags.go:64] FLAG: --anonymous-auth="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396859 4751 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396870 4751 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396880 4751 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396891 4751 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396901 4751 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396911 4751 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396919 4751 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396929 4751 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396938 4751 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396947 4751 flags.go:64] FLAG: 
--cgroup-driver="cgroupfs" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396956 4751 flags.go:64] FLAG: --cgroup-root="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396965 4751 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396974 4751 flags.go:64] FLAG: --client-ca-file="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396982 4751 flags.go:64] FLAG: --cloud-config="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.396991 4751 flags.go:64] FLAG: --cloud-provider="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397000 4751 flags.go:64] FLAG: --cluster-dns="[]" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397010 4751 flags.go:64] FLAG: --cluster-domain="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397018 4751 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397027 4751 flags.go:64] FLAG: --config-dir="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397036 4751 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397045 4751 flags.go:64] FLAG: --container-log-max-files="5" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397057 4751 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397066 4751 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397075 4751 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397084 4751 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397093 4751 flags.go:64] FLAG: --contention-profiling="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397699 4751 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397727 4751 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397739 4751 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397834 4751 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397853 4751 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397866 4751 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397878 4751 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397890 4751 flags.go:64] FLAG: --enable-load-reader="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397916 4751 flags.go:64] FLAG: --enable-server="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397929 4751 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397948 4751 flags.go:64] FLAG: --event-burst="100" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397960 4751 flags.go:64] FLAG: --event-qps="50" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397973 4751 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397985 4751 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 23 
03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.397999 4751 flags.go:64] FLAG: --eviction-hard="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398015 4751 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398026 4751 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398047 4751 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398059 4751 flags.go:64] FLAG: --eviction-soft="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398071 4751 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398083 4751 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398095 4751 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398107 4751 flags.go:64] FLAG: --experimental-mounter-path="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398120 4751 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398131 4751 flags.go:64] FLAG: --fail-swap-on="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398153 4751 flags.go:64] FLAG: --feature-gates="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398168 4751 flags.go:64] FLAG: --file-check-frequency="20s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398182 4751 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398194 4751 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398206 4751 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398218 4751 flags.go:64] FLAG: --healthz-port="10248" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398231 4751 flags.go:64] FLAG: --help="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398243 4751 flags.go:64] FLAG: --hostname-override="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398255 4751 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398279 4751 flags.go:64] FLAG: --http-check-frequency="20s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398291 4751 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398303 4751 flags.go:64] FLAG: --image-credential-provider-config="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398315 4751 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398327 4751 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398378 4751 flags.go:64] FLAG: --image-service-endpoint="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398392 4751 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398404 4751 flags.go:64] FLAG: --kube-api-burst="100" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398427 4751 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398440 4751 flags.go:64] FLAG: --kube-api-qps="50" Nov 23 03:55:04 crc 
kubenswrapper[4751]: I1123 03:55:04.398451 4751 flags.go:64] FLAG: --kube-reserved="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398463 4751 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398476 4751 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398490 4751 flags.go:64] FLAG: --kubelet-cgroups="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398501 4751 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398513 4751 flags.go:64] FLAG: --lock-file="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398524 4751 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398545 4751 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398558 4751 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398576 4751 flags.go:64] FLAG: --log-json-split-stream="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398588 4751 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398600 4751 flags.go:64] FLAG: --log-text-split-stream="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398611 4751 flags.go:64] FLAG: --logging-format="text" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398624 4751 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398637 4751 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398659 4751 flags.go:64] FLAG: --manifest-url="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398671 4751 flags.go:64] FLAG: --manifest-url-header="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398688 4751 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398700 4751 flags.go:64] FLAG: --max-open-files="1000000" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398715 4751 flags.go:64] FLAG: --max-pods="110" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398727 4751 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398739 4751 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398751 4751 flags.go:64] FLAG: --memory-manager-policy="None" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398764 4751 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398786 4751 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398798 4751 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398809 4751 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398926 4751 flags.go:64] FLAG: --node-status-max-images="50" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398941 4751 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398953 4751 
flags.go:64] FLAG: --oom-score-adj="-999" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398965 4751 flags.go:64] FLAG: --pod-cidr="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398976 4751 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.398993 4751 flags.go:64] FLAG: --pod-manifest-path="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399058 4751 flags.go:64] FLAG: --pod-max-pids="-1" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399069 4751 flags.go:64] FLAG: --pods-per-core="0" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399079 4751 flags.go:64] FLAG: --port="10250" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399319 4751 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399334 4751 flags.go:64] FLAG: --provider-id="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399365 4751 flags.go:64] FLAG: --qos-reserved="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399375 4751 flags.go:64] FLAG: --read-only-port="10255" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399385 4751 flags.go:64] FLAG: --register-node="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399394 4751 flags.go:64] FLAG: --register-schedulable="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399403 4751 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399444 4751 flags.go:64] FLAG: --registry-burst="10" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399454 4751 flags.go:64] FLAG: --registry-qps="5" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399463 4751 flags.go:64] FLAG: --reserved-cpus="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399472 4751 flags.go:64] FLAG: --reserved-memory="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399483 4751 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399493 4751 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399502 4751 flags.go:64] FLAG: --rotate-certificates="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399510 4751 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399520 4751 flags.go:64] FLAG: --runonce="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399530 4751 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399539 4751 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399549 4751 flags.go:64] FLAG: --seccomp-default="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399558 4751 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399567 4751 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399578 4751 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399587 4751 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 
03:55:04.399597 4751 flags.go:64] FLAG: --storage-driver-password="root" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399606 4751 flags.go:64] FLAG: --storage-driver-secure="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399616 4751 flags.go:64] FLAG: --storage-driver-table="stats" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399625 4751 flags.go:64] FLAG: --storage-driver-user="root" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399634 4751 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399644 4751 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399653 4751 flags.go:64] FLAG: --system-cgroups="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399662 4751 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399680 4751 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399689 4751 flags.go:64] FLAG: --tls-cert-file="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399697 4751 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399710 4751 flags.go:64] FLAG: --tls-min-version="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399718 4751 flags.go:64] FLAG: --tls-private-key-file="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399727 4751 flags.go:64] FLAG: --topology-manager-policy="none" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399736 4751 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399745 4751 flags.go:64] FLAG: --topology-manager-scope="container" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399754 4751 flags.go:64] FLAG: --v="2" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399766 4751 flags.go:64] FLAG: --version="false" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399778 4751 flags.go:64] FLAG: --vmodule="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399789 4751 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.399799 4751 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400019 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400030 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400039 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400048 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400056 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400066 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400075 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400084 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 03:55:04 crc 
kubenswrapper[4751]: W1123 03:55:04.400092 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400100 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400108 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400117 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400125 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400133 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400142 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400151 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400158 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400166 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400174 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400182 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400190 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400198 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400206 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400214 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400222 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400230 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400238 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400248 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
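[editor's note] Interleaved with the gate warnings above is the flags.go:64 dump: one FLAG: --name="value" record per registered flag, defaults included, which is why values such as --address="0.0.0.0" appear even though only a handful of flags are set in the kubelet unit. A self-contained sketch of the same pattern using the standard library flag package (the kubelet itself uses pflag; this stand-in and its two sample flags are assumptions for illustration):

package main

// Sketch: after parsing, walk every registered flag and log its
// effective value, mirroring the FLAG: dump in this journal.
import (
	"flag"
	"fmt"
)

func main() {
	flag.Int("v", 2, "log verbosity")
	flag.String("node-ip", "192.168.126.11", "node IP address")
	flag.Parse()

	// VisitAll visits all flags in lexicographical order, whether or not
	// they were set on the command line, so defaults are printed too.
	flag.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}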
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400259 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400268 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400277 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400285 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400294 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400302 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400311 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400319 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400326 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400334 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400369 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400378 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400386 4751 feature_gate.go:330] unrecognized feature gate: Example Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400394 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400402 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400410 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400418 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400426 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400433 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400443 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400452 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400461 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400469 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400477 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400484 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400492 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400500 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400511 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400521 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400529 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400539 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400574 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400583 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400591 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400599 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400607 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400614 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400623 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400635 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400645 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400654 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400662 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.400673 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.400698 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.409810 4751 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.409845 4751 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409955 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409964 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409968 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409973 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409977 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409981 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409986 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409990 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409995 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.409999 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410004 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410009 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410015 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410020 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410024 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410036 4751 
feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410040 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410045 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410049 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410054 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410058 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410065 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410073 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410078 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410083 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410088 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410093 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410098 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410104 4751 feature_gate.go:330] unrecognized feature gate: Example Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410109 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410114 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410118 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410124 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410130 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410137 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410140 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410145 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410149 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410153 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410159 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410164 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410168 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410172 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410177 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410181 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410185 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410190 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410195 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410199 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410203 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410207 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410211 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410215 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410218 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410222 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410226 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410230 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410234 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410237 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410241 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410245 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410249 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410253 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410257 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410261 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410265 4751 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410268 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410272 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410276 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410280 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410284 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.410293 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410460 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410468 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410483 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410489 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410493 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410498 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410505 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410511 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410517 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410521 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410526 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410531 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410536 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410541 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410545 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410549 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410553 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410557 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410562 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410566 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410571 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410575 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410579 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410583 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410588 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410592 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410596 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410600 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410605 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410609 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410613 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410617 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410622 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410668 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410673 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410677 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410681 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410685 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410690 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410696 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410710 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410715 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410719 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410724 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410729 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410733 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410737 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410741 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410745 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410749 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410753 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410758 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410761 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410765 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410770 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410775 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410780 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410785 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410790 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410795 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410799 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410803 4751 feature_gate.go:330] unrecognized feature gate: Example Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410807 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410811 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410815 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410819 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410823 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410827 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410831 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410835 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.410840 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.410847 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.411053 4751 server.go:940] "Client rotation is on, will bootstrap in background" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.415438 4751 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.415517 4751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
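[editor's note] Here the kubelet finds a still-valid client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem, so no TLS bootstrap is needed and the certificate manager can derive an expiration and a rotation deadline from it (both logged just below). A small stdlib sketch that inspects the same PEM the way an operator might when debugging rotation; the path is taken from the log, and the file is assumed to hold the certificate and private key concatenated:

package main

// Sketch: print the subject and NotAfter of each certificate in the
// kubelet's current client-cert PEM. NotAfter is the "Certificate
// expiration" the certificate manager reports before choosing a
// rotation deadline.
import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue // skip the private-key block in the same file
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s expires=%s\n", cert.Subject, cert.NotAfter)
	}
}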
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.417341 4751 server.go:997] "Starting client certificate rotation" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.417377 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.420122 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 16:24:22.935202928 +0000 UTC Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.420293 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.448936 4751 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.454423 4751 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.454510 4751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.478505 4751 log.go:25] "Validated CRI v1 runtime API" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.520153 4751 log.go:25] "Validated CRI v1 image API" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.524377 4751 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.532080 4751 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-23-03-50-43-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.532113 4751 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.551652 4751 manager.go:217] Machine: {Timestamp:2025-11-23 03:55:04.54951345 +0000 UTC m=+0.743184849 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c9a2725d-83da-40b9-a1a2-b2190ab58130 BootID:d131c98e-35d3-4a76-8a3a-23528d1e3523 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:aa:81:2d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:aa:81:2d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a2:44:6d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:11:9e:ed Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:20:06:ab Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:93:5f:29 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:16:c0:e2:49:50:8f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:9b:de:cd:44:77 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.551980 4751 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.552114 4751 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.553163 4751 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.553402 4751 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.553466 4751 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.553693 4751 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.553723 4751 
container_manager_linux.go:303] "Creating device plugin manager" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.554520 4751 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.554558 4751 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.554722 4751 state_mem.go:36] "Initialized new in-memory state store" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.554826 4751 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.560246 4751 kubelet.go:418] "Attempting to sync node with API server" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.560273 4751 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.560299 4751 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.560313 4751 kubelet.go:324] "Adding apiserver pod source" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.560328 4751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.565329 4751 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.566263 4751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.567560 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.567647 4751 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.567689 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.567607 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.567765 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569254 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 23 03:55:04 crc 
kubenswrapper[4751]: I1123 03:55:04.569281 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569289 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569297 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569310 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569318 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569327 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569340 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569372 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569381 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569403 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.569411 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.570472 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.570870 4751 server.go:1280] "Started kubelet" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.571166 4751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.571291 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.571785 4751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.571878 4751 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 03:55:04 crc systemd[1]: Started Kubernetes Kubelet. 
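The container manager config dumped above ("SystemReserved" of 200m CPU / 350Mi memory, "KubeReserved":null, and a "memory.available" hard-eviction threshold of 100Mi) feeds Kubernetes' standard node-allocatable formula: allocatable = capacity - kube-reserved - system-reserved - hard-eviction threshold. A short Go sketch of that arithmetic using the memory numbers from this log:

package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	capacity := int64(33654120448)    // MemoryCapacity from the Machine record
	systemReserved := int64(350 * Mi) // "SystemReserved":{"memory":"350Mi"}
	kubeReserved := int64(0)          // "KubeReserved":null
	hardEviction := int64(100 * Mi)   // {"Signal":"memory.available", ... "100Mi"}

	allocatable := capacity - systemReserved - kubeReserved - hardEviction
	fmt.Printf("allocatable memory: %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/(1<<30))
}

This prints 33182261248 bytes (about 30.90 GiB), which is roughly the figure that would surface as the node's allocatable memory given this configuration.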
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.581392 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.581448 4751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.581637 4751 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.581720 4751 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.581739 4751 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.581676 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 17:29:24.639147195 +0000 UTC Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.581784 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1165h34m20.057368569s for next certificate rotation Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.582175 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms" Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.582790 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.582888 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.583115 4751 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.583277 4751 server.go:460] "Adding debug handlers to kubelet server" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.584692 4751 factory.go:55] Registering systemd factory Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.584724 4751 factory.go:221] Registration of the systemd container factory successfully Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.584332 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a8683c4e06eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 03:55:04.570851054 +0000 UTC m=+0.764522413,LastTimestamp:2025-11-23 03:55:04.570851054 +0000 UTC m=+0.764522413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.590745 4751 factory.go:153] Registering CRI-O factory Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.590891 4751 factory.go:221] Registration of the crio container factory successfully Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.591093 4751 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.591231 4751 factory.go:103] Registering Raw factory Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.591403 4751 manager.go:1196] Started watching for new ooms in manager Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.593577 4751 manager.go:319] Starting recovery of all containers Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595625 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595671 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595682 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595691 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595702 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595712 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595720 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595730 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" 
seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595740 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595748 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595756 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595780 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595788 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595798 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595820 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595828 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595836 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595844 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595853 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 
03:55:04.595861 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595869 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595877 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595886 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595894 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595902 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595911 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595923 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595934 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595942 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595951 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595959 4751 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595968 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.595976 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596009 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596018 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596028 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596038 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596049 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596059 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596068 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596078 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596087 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596097 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596106 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596115 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596124 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596133 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596142 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596150 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596159 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596169 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596177 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596191 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596200 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596210 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596221 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596230 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596238 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596246 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596273 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596283 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596292 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596301 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596308 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596316 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596326 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596334 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596345 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596365 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596374 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596412 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596423 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596431 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596439 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596447 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596455 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596463 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596472 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596481 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596489 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596498 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596507 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596516 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596524 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596533 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596543 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596552 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596560 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596569 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596577 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596585 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596593 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596603 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596612 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596621 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596629 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596638 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596647 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596656 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596669 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596680 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596689 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596698 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596706 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596718 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596728 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596738 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596748 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596757 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596766 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596776 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596785 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596794 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596803 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596811 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596819 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596828 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596836 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596845 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596853 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596862 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596870 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596880 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596889 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596899 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596907 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596916 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596924 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596932 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596941 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596950 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596960 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596969 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596978 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596987 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.596995 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597004 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597014 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597022 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597030 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597043 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597052 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597061 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597069 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597079 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597088 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597098 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597108 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597119 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597129 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597162 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597170 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597180 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597189 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597198 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597207 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597217 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597225 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597234 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597243 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.597254 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598766 4751 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598813 4751 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598850 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598873 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598892 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598912 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598931 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598950 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598968 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.598985 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599003 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599022 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599041 4751 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599060 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599078 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599096 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599114 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599132 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599151 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599170 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599188 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599207 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599241 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599259 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599276 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599295 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599315 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599332 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599472 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599511 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599531 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599550 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599567 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599585 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599604 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599623 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599641 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599663 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599682 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599699 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599717 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599733 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599752 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599768 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599785 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599804 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599822 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599840 4751 reconstruct.go:97] "Volume reconstruction finished" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.599852 4751 reconciler.go:26] "Reconciler: start to sync state" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.614468 4751 manager.go:324] Recovery completed Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.628087 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.630259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.630393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.630421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.631585 4751 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.631626 4751 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.631673 4751 state_mem.go:36] "Initialized new in-memory state store" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.638504 4751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.640565 4751 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.642706 4751 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.642768 4751 kubelet.go:2335] "Starting kubelet main sync loop" Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.642893 4751 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 03:55:04 crc kubenswrapper[4751]: W1123 03:55:04.643980 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.644032 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.657231 4751 policy_none.go:49] "None policy: Start" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.658309 4751 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.658341 4751 state_mem.go:35] "Initializing new in-memory state store" Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.681987 4751 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.706563 4751 manager.go:334] "Starting Device Plugin manager" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.706641 4751 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.706655 4751 server.go:79] "Starting device plugin registration server" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.707426 4751 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.707466 4751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.707593 4751 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.707691 4751 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.707710 4751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.712838 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.743175 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 23 03:55:04 crc kubenswrapper[4751]: 
I1123 03:55:04.743394 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.744943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.745000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.745013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.745159 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.745650 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.745728 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.746480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.746577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.746649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.746876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.746904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.746918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.746953 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.747080 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.747157 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.748158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.748186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.748249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.750034 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.750058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.750069 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.750188 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.750414 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.750451 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751327 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751771 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751859 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.751889 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.752648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.752672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.752684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.752773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.752793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.752803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.752951 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.752975 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.754090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.754120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.754134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.783663 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.802896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.802951 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.802978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.802999 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803019 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803040 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803061 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803103 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803144 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803185 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803204 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.803224 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.807747 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.808644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.808712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.808732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.808762 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 03:55:04 crc kubenswrapper[4751]: E1123 03:55:04.809409 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905001 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905076 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905116 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905149 4751 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905180 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905213 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905214 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905246 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905267 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905312 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905322 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905338 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 
03:55:04.905317 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905324 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905377 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905442 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905498 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905513 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905473 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905594 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905606 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905643 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905659 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905699 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905737 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905776 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:04 crc kubenswrapper[4751]: I1123 03:55:04.905851 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.009529 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.011199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.011258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.011281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.011325 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: E1123 03:55:05.011969 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.081842 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.112008 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.127181 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: W1123 03:55:05.134814 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d2dd973d27bb91368edde5a8fc864c74fe2c6e4c2e0ac72506df19fd27aca4b2 WatchSource:0}: Error finding container d2dd973d27bb91368edde5a8fc864c74fe2c6e4c2e0ac72506df19fd27aca4b2: Status 404 returned error can't find the container with id d2dd973d27bb91368edde5a8fc864c74fe2c6e4c2e0ac72506df19fd27aca4b2
Nov 23 03:55:05 crc kubenswrapper[4751]: W1123 03:55:05.151145 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-57d76a0dc667af007de9a37776ba38146c8253820132f256d58f0b26c83d12f6 WatchSource:0}: Error finding container 57d76a0dc667af007de9a37776ba38146c8253820132f256d58f0b26c83d12f6: Status 404 returned error can't find the container with id 57d76a0dc667af007de9a37776ba38146c8253820132f256d58f0b26c83d12f6
Nov 23 03:55:05 crc kubenswrapper[4751]: W1123 03:55:05.152439 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9fd025182285f65bee5365b4c92618678a5e39194682ef6113e5200985013873 WatchSource:0}: Error finding container 9fd025182285f65bee5365b4c92618678a5e39194682ef6113e5200985013873: Status 404 returned error can't find the container with id 9fd025182285f65bee5365b4c92618678a5e39194682ef6113e5200985013873
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.155636 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.159603 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: W1123 03:55:05.182124 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d503b17b97e9b74e863232f71380152d2469fb7f241e6732462150cebfbb8ebe WatchSource:0}: Error finding container d503b17b97e9b74e863232f71380152d2469fb7f241e6732462150cebfbb8ebe: Status 404 returned error can't find the container with id d503b17b97e9b74e863232f71380152d2469fb7f241e6732462150cebfbb8ebe
Nov 23 03:55:05 crc kubenswrapper[4751]: E1123 03:55:05.185252 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms"
Nov 23 03:55:05 crc kubenswrapper[4751]: W1123 03:55:05.191479 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-41ddc2a7a583ed052b1faee4162260bbf1be7c57dacbcc16bde5e49d595ddf3f WatchSource:0}: Error finding container 41ddc2a7a583ed052b1faee4162260bbf1be7c57dacbcc16bde5e49d595ddf3f: Status 404 returned error can't find the container with id 41ddc2a7a583ed052b1faee4162260bbf1be7c57dacbcc16bde5e49d595ddf3f
Nov 23 03:55:05 crc kubenswrapper[4751]: W1123 03:55:05.400942 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:05 crc kubenswrapper[4751]: E1123 03:55:05.401094 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.412076 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.413917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.413969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.413984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.414016 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: E1123 03:55:05.414811 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc"
Nov 23 03:55:05 crc kubenswrapper[4751]: W1123 03:55:05.416530 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:05 crc kubenswrapper[4751]: E1123 03:55:05.416648 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:05 crc kubenswrapper[4751]: W1123 03:55:05.522133 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:05 crc kubenswrapper[4751]: E1123 03:55:05.522231 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.572608 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.647512 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9fd025182285f65bee5365b4c92618678a5e39194682ef6113e5200985013873"}
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.649147 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"57d76a0dc667af007de9a37776ba38146c8253820132f256d58f0b26c83d12f6"}
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.650464 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d2dd973d27bb91368edde5a8fc864c74fe2c6e4c2e0ac72506df19fd27aca4b2"}
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.651846 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"41ddc2a7a583ed052b1faee4162260bbf1be7c57dacbcc16bde5e49d595ddf3f"}
Nov 23 03:55:05 crc kubenswrapper[4751]: I1123 03:55:05.653304 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d503b17b97e9b74e863232f71380152d2469fb7f241e6732462150cebfbb8ebe"}
Nov 23 03:55:05 crc kubenswrapper[4751]: E1123 03:55:05.986455 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s"
Nov 23 03:55:06 crc kubenswrapper[4751]: W1123 03:55:06.206173 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:06 crc kubenswrapper[4751]: E1123 03:55:06.206299 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.215738 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.216911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.216965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.216993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.217060 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 03:55:06 crc kubenswrapper[4751]: E1123 03:55:06.217870 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.572778 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.575144 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 23 03:55:06 crc kubenswrapper[4751]: E1123 03:55:06.576479 4751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.659431 4751 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512" exitCode=0
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.659539 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.659543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512"}
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.660569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.660602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.660617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.663500 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2"}
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.663543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3"}
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.663561 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f"}
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.663576 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17"}
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.663646 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.665682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.665714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.665729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.666101 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400" exitCode=0
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.666197 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400"}
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.666249 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.667371 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.667405 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.667417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.668044 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887" exitCode=0
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.668166 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887"}
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.668203 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.668966 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.669411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.669458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.669482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.669981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.670032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.670049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.670380 4751 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c8f629d581b1ae8e83ea5d250d3762ba8e390dcd0eccc6c24ed4b5273c947cf4" exitCode=0
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.670427 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c8f629d581b1ae8e83ea5d250d3762ba8e390dcd0eccc6c24ed4b5273c947cf4"}
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.670488 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.671669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.671709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:06 crc kubenswrapper[4751]: I1123 03:55:06.671725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:07 crc kubenswrapper[4751]: W1123 03:55:07.556419 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:07 crc kubenswrapper[4751]: E1123 03:55:07.556528 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.572893 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:07 crc kubenswrapper[4751]: E1123 03:55:07.587792 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.676794 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d" exitCode=0
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.676963 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.676988 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.678220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.678254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.678266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.680728 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.680753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e1e4eb4fdcf7b10c30befd683b4ee0e03f56042b6733ecc805519b2e192ca61a"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.685182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.685214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.685223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.687494 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.687474 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.687677 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.687706 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.688368 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.688406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.688419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.691869 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.691920 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.691953 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.691976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772"}
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.691925 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.693682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.693720 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.693738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.805745 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.818866 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.819787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.819816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.819825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.819843 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 03:55:07 crc kubenswrapper[4751]: E1123 03:55:07.820256 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc"
Nov 23 03:55:07 crc kubenswrapper[4751]: I1123 03:55:07.952245 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:08 crc kubenswrapper[4751]: W1123 03:55:08.024534 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:08 crc kubenswrapper[4751]: E1123 03:55:08.024669 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:08 crc kubenswrapper[4751]: W1123 03:55:08.117262 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:08 crc kubenswrapper[4751]: E1123 03:55:08.117420 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:08 crc kubenswrapper[4751]: W1123 03:55:08.229605 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Nov 23 03:55:08 crc kubenswrapper[4751]: E1123 03:55:08.229701 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.699085 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c992bd52a18931e479af89c7a0633607918c574f97a7d66185fcee5ea8d198b0"}
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.699280 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.700480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.700529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.700546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.702605 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1" exitCode=0
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.702746 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.702760 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.702840 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.702916 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.703206 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1"}
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704630 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704837 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.704907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.706180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.706233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:08 crc kubenswrapper[4751]: I1123 03:55:08.706253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.712665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b"}
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.712728 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c"}
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.712750 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec"}
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.712793 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda"}
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.712755 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.712877 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.712937 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.714254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.714304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.714325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.715204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.715265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:09 crc kubenswrapper[4751]: I1123 03:55:09.715289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.680906 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.723405 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc"}
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.723580 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.724830 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.725030 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.725200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.725233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.725244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.726468 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.726522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:10 crc kubenswrapper[4751]: I1123 03:55:10.726531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.021231 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.023174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.023237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.023256 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.023293 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.085231 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.085470 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.086986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.087050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.087093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.095630 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.162222 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.726632 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.726735 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.728411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.728481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.728500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.729029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.729089 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:11 crc kubenswrapper[4751]: I1123 03:55:11.729108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.729076 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.729900 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.730068 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.730390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.730427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.730450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.731323 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.731425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.731445 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:12 crc kubenswrapper[4751]: I1123 03:55:12.953910 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.095228 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.095444 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.096834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.096877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.096888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.732051 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.733521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.733593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:13 crc kubenswrapper[4751]: I1123 03:55:13.733613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:14 crc kubenswrapper[4751]: E1123 03:55:14.712957 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.264482 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.264819 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.266732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.266776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.266800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.271850 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.737298 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.738699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.738773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:15 crc kubenswrapper[4751]: I1123 03:55:15.738791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:16 crc kubenswrapper[4751]: I1123 03:55:16.095240 4751 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 23 03:55:16 crc kubenswrapper[4751]: I1123 03:55:16.095400 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 03:55:16 crc kubenswrapper[4751]: I1123 03:55:16.499480 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 23 03:55:16 crc kubenswrapper[4751]: I1123 03:55:16.499837 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:16 crc kubenswrapper[4751]: I1123 03:55:16.501029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:16 crc kubenswrapper[4751]: I1123 03:55:16.501084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:16 crc kubenswrapper[4751]: I1123 03:55:16.501101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.574003 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.752903 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.755883 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c992bd52a18931e479af89c7a0633607918c574f97a7d66185fcee5ea8d198b0" exitCode=255
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.755945 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c992bd52a18931e479af89c7a0633607918c574f97a7d66185fcee5ea8d198b0"}
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.756169 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.757488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.757595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.757624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:18 crc kubenswrapper[4751]: I1123 03:55:18.758470 4751 scope.go:117] "RemoveContainer" containerID="c992bd52a18931e479af89c7a0633607918c574f97a7d66185fcee5ea8d198b0"
Nov 23 03:55:18 crc kubenswrapper[4751]: E1123 03:55:18.942950 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187a8683c4e06eee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 03:55:04.570851054 +0000 UTC m=+0.764522413,LastTimestamp:2025-11-23 03:55:04.570851054 +0000 UTC m=+0.764522413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.200565 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.200703 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.207393 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.207479 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.762556 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.765948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d"}
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.766440 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.767872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.768158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:19 crc kubenswrapper[4751]: I1123 03:55:19.768187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:20 crc kubenswrapper[4751]: I1123 03:55:20.725146 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:20 crc kubenswrapper[4751]: I1123 03:55:20.768960 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:20 crc kubenswrapper[4751]: I1123 03:55:20.770050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:20 crc kubenswrapper[4751]: I1123 03:55:20.770088 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:20 crc kubenswrapper[4751]: I1123 03:55:20.770097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.738367 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.739388 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.741219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.741277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.741290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.746346 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.774950 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.775969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.776037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:22 crc kubenswrapper[4751]: I1123 03:55:22.776060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.201803 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.205207 4751 trace.go:236] Trace[325158791]: "Reflector ListAndWatch"
name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 03:55:12.847) (total time: 11357ms): Nov 23 03:55:24 crc kubenswrapper[4751]: Trace[325158791]: ---"Objects listed" error:<nil> 11357ms (03:55:24.205) Nov 23 03:55:24 crc kubenswrapper[4751]: Trace[325158791]: [11.357570147s] [11.357570147s] END Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.205266 4751 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.205778 4751 trace.go:236] Trace[271626298]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 03:55:11.905) (total time: 12300ms): Nov 23 03:55:24 crc kubenswrapper[4751]: Trace[271626298]: ---"Objects listed" error:<nil> 12300ms (03:55:24.205) Nov 23 03:55:24 crc kubenswrapper[4751]: Trace[271626298]: [12.300396997s] [12.300396997s] END Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.205827 4751 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.209669 4751 trace.go:236] Trace[257475709]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 03:55:13.090) (total time: 11119ms): Nov 23 03:55:24 crc kubenswrapper[4751]: Trace[257475709]: ---"Objects listed" error:<nil> 11119ms (03:55:24.209) Nov 23 03:55:24 crc kubenswrapper[4751]: Trace[257475709]: [11.119529944s] [11.119529944s] END Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.209720 4751 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.209796 4751 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.210249 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.210839 4751 trace.go:236] Trace[1880626920]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 03:55:10.897) (total time: 13312ms): Nov 23 03:55:24 crc kubenswrapper[4751]: Trace[1880626920]: ---"Objects listed" error:<nil> 13312ms (03:55:24.210) Nov 23 03:55:24 crc kubenswrapper[4751]: Trace[1880626920]: [13.312961996s] [13.312961996s] END Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.210885 4751 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.215735 4751 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.251114 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.257360 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.573279 4751 apiserver.go:52] "Watching apiserver" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.578215 4751 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 
03:55:24.578836 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.579892 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.579987 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.580089 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.580433 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.580726 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.580766 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.580821 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.580846 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.580887 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.584287 4751 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.584444 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.587703 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.587947 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.588192 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.588413 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.594369 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.594514 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.594544 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.594675 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.612724 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.612822 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.612937 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613108 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613179 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613242 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613266 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613299 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613357 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613459 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613507 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613558 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613608 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613662 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613715 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613760 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613804 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613854 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613901 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.613957 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614008 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614022 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614009 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614056 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614111 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614184 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614228 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614255 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614496 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614550 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614613 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614674 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614702 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614763 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614813 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614849 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614943 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614978 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615049 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615117 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615151 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615184 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615218 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615252 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615284 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615317 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615356 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615427 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615464 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615498 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615537 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615570 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615600 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615634 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615705 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615736 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615770 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615801 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615836 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615869 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615931 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615967 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615998 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616032 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616064 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616094 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616199 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616235 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616270 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616302 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616524 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616576 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616688 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616721 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616756 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616789 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616861 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616895 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617535 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617581 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617617 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617655 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617689 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617723 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617757 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617789 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617820 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617852 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617885 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.614832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615628 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.618064 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.618316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.618313 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619166 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619245 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619259 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619340 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619461 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619724 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619774 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617925 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619881 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619919 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619945 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619994 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620021 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620048 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620072 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620095 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620119 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620146 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620169 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620194 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620215 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620238 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620263 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620285 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620311 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620333 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620362 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620455 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620511 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620533 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620558 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620581 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620603 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 
03:55:24.620624 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620657 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620698 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620732 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620759 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620782 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620806 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620828 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620852 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620876 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 03:55:24 crc 
kubenswrapper[4751]: I1123 03:55:24.620897 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620922 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620949 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620990 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621039 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621063 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621088 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621113 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621135 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc 
kubenswrapper[4751]: I1123 03:55:24.621160 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621184 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621245 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621271 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621295 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621317 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621342 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621365 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621410 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621474 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621517 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621538 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621581 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621603 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621623 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621649 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621974 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622000 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622023 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622068 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622099 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622122 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622166 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622189 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622215 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622267 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622290 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622368 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622462 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622510 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622626 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622656 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622701 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622751 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622775 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622798 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622851 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622876 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622928 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622951 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622976 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623001 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623026 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623051 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623081 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623106 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623130 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623193 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623218 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623245 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623315 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623362 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623410 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623445 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623471 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623523 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" 
(UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623549 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623577 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623604 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623661 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623684 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623713 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623777 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623796 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc 
kubenswrapper[4751]: I1123 03:55:24.623810 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623827 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623845 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623861 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623877 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623894 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623907 4751 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623922 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623937 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623951 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623965 4751 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623979 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623992 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: 
I1123 03:55:24.624007 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624021 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624038 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624051 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624066 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624079 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624093 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.619716 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615802 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615936 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615975 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616230 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616620 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616656 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616679 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.616922 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617483 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617723 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.617869 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620137 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620306 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620450 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620491 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.620534 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.625808 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.625828 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621010 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621171 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.621765 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622191 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622502 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622678 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.622853 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623059 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623162 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623463 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.623804 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624192 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624332 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624432 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.624878 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.625409 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.615781 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.626355 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.626453 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.626581 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.626728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.627033 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.627131 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.627215 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.627412 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.627443 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.627668 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.627755 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.628181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.628504 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.629146 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.630210 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.631965 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.631973 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.632008 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.632316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.632748 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.632813 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.632913 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.632960 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.632979 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.633266 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.633452 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.633550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.633583 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.633616 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.633684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.634001 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.634053 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.634116 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.634277 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.634545 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.635290 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.635790 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.636003 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.636190 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.636588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.637081 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.637427 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.637533 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.637709 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.637770 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.638151 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.638477 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.639220 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.639432 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.639689 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.640041 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.640710 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.640805 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.641251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.641333 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.642059 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.642086 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.642097 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.642689 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.644082 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.636990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.645185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.645587 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.645676 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.645968 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.646454 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.646907 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.647113 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.647546 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.648063 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.648294 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.648971 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.649001 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.649667 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.649720 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.649966 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.650337 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.650879 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.651206 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.651799 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.651982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.652022 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.652166 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.652235 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.652702 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.652708 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.653007 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.653150 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.653292 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.653511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.653850 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.654046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.654162 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.654762 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.654818 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.654969 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.655078 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.655371 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.655691 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.655909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.656243 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.656251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.659007 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.659139 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.659462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.660134 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.660462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.661112 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.661918 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.662234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.662703 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.663064 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.663103 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.663183 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.663237 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.663587 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.663634 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.663690 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.663711 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.663786 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:25.163760396 +0000 UTC m=+21.357431995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.663978 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.664315 4751 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.664412 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.664507 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.664572 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:25.164553847 +0000 UTC m=+21.358225216 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.664680 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.665111 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:25.164989728 +0000 UTC m=+21.358661177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.666645 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.667130 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.667879 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.668029 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.668281 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.668283 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.668611 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.668775 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.669207 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.669556 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:55:25.169540798 +0000 UTC m=+21.363212387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.671149 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.671181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.672192 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.674911 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.679607 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.679765 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.680363 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.682554 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.683695 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.683993 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.686624 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.687917 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.690540 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.692962 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.693275 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.698612 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.698876 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.699096 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.699220 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.699338 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.703411 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:25.203382785 +0000 UTC m=+21.397054154 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.704012 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.704039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.705162 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.709249 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.712802 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.716363 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.717441 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.719042 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.721091 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.721699 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.723601 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.724694 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.724849 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725068 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725251 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725275 4751 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725272 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725289 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725330 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725406 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725424 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725437 4751 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725479 4751 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725493 4751 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725506 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725522 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" 
Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725564 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725578 4751 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725592 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725604 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725653 4751 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725692 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725738 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725757 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725773 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725789 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725925 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725951 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.725967 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc 
kubenswrapper[4751]: I1123 03:55:24.726020 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726036 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726049 4751 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726094 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726111 4751 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726133 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726146 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726187 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726201 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726213 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726108 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726229 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726301 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc 
kubenswrapper[4751]: I1123 03:55:24.726318 4751 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726331 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726378 4751 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726395 4751 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726414 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726462 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726476 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726488 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726502 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726517 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726555 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726568 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726582 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 
03:55:24.726576 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726593 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726670 4751 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726689 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726704 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726718 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726730 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726807 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726823 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726836 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726849 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726860 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726872 4751 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726884 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726897 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726908 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726921 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726934 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726947 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726958 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726970 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726981 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726993 4751 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.726995 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727005 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727019 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727030 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727041 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727053 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727063 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727075 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727091 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727103 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727137 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727153 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727165 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727198 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727210 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727224 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727236 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727248 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727261 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727272 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727284 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727296 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727311 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727323 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727338 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727390 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727407 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727423 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727439 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727454 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727472 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727490 4751 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727507 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727521 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727535 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727550 4751 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727566 4751 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727580 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727598 4751 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727612 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727609 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727629 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727649 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727665 4751 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727682 4751 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727698 4751 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727713 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727730 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727747 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727763 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727778 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727793 4751 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727862 4751 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727880 4751 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727897 4751 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727914 4751 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727929 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727944 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727960 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727977 4751 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.727993 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728009 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728026 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728042 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728056 4751 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728072 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728089 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728104 4751 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728122 4751 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728127 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728138 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728268 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728292 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728307 4751 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728323 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728340 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729195 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729210 4751 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729222 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729234 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729250 4751 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729264 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.728723 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729328 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729405 4751 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729424 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729441 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729460 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729473 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729485 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729521 4751 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729532 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729543 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729554 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729569 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729581 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729591 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729604 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729618 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729631 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729646 4751 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729659 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.729677 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.730365 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.731904 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.732571 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.733987 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.734193 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.734652 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.735781 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.736335 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.737744 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.739198 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.740216 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.741970 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.743019 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.744541 4751 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.745420 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" 
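The run of reconciler_common.go:293 "Volume detached" entries above is the kubelet's volume manager confirming that teardown finished for each volume (DevicePath is reported as "" for these configmap/secret/projected/empty-dir volumes), and the kubelet_volumes.go:163 entries record the follow-up pruning of /var/lib/kubelet/pods/<uid>/volumes for pods that no longer exist. A minimal Go sketch for tallying both event types from a journal dump piped on stdin follows; the regular expressions and the stdin assumption are illustrative conveniences matched to this excerpt, not a stable kubelet interface.

```go
// tally_kubelet_volumes.go — count "Volume detached" and orphaned-volume
// cleanup events per pod UID from a kubelet journal dump on stdin.
// Illustrative only: the regexes match the log lines in this excerpt.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// UniqueName has the form "kubernetes.io/<plugin>/<podUID>-<volume>";
	// journal output shows the inner quotes backslash-escaped.
	detachedRe = regexp.MustCompile(`Volume detached for volume \\?"([^"\\]+)\\?" \(UniqueName: \\?"kubernetes\.io/[^/]+/([0-9a-f-]{36})-`)
	orphanRe   = regexp.MustCompile(`Cleaned up orphaned pod volumes dir" podUID="([0-9a-f-]{36})"`)
)

func main() {
	detached := map[string]int{}  // podUID -> number of volumes detached
	orphaned := map[string]bool{} // podUID -> volumes dir pruned

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := detachedRe.FindStringSubmatch(line); m != nil {
			detached[m[2]]++
		}
		if m := orphanRe.FindStringSubmatch(line); m != nil {
			orphaned[m[1]] = true
		}
	}
	for uid, n := range detached {
		fmt.Printf("%s: %d volume(s) detached, volumes dir pruned: %v\n", uid, n, orphaned[uid])
	}
}
```

Run as, e.g., `journalctl -u kubelet | go run tally_kubelet_volumes.go` to cross-check that every pod whose volumes were detached also had its volumes directory cleaned up.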
Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.746137 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.749243 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.750351 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.751036 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.754146 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.755195 4751 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.757829 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.759491 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.761137 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.763070 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.763868 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.765484 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.766405 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.767794 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.769474 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.771949 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.772222 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.774602 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.776227 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.776866 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.778013 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.778645 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.779341 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.780724 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.781339 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.786793 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: E1123 03:55:24.794149 4751 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.804430 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.821375 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.831125 4751 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.831161 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.833097 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.846971 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.861187 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.873735 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.886074 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.896973 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.901884 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.915215 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: W1123 03:55:24.918743 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a920b8e07980b3058178155cf48101cf9da11d19135f41fac62a9328d235d1e4 WatchSource:0}: Error finding container a920b8e07980b3058178155cf48101cf9da11d19135f41fac62a9328d235d1e4: Status 404 returned error can't find the container with id a920b8e07980b3058178155cf48101cf9da11d19135f41fac62a9328d235d1e4 Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.927829 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.928802 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 03:55:24 crc kubenswrapper[4751]: W1123 03:55:24.945696 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-35250f96e62d6d1418ea7ba2a61f5d1176add5765aed9568173acbd38bb610fd WatchSource:0}: Error finding container 35250f96e62d6d1418ea7ba2a61f5d1176add5765aed9568173acbd38bb610fd: Status 404 returned error can't find the container with id 35250f96e62d6d1418ea7ba2a61f5d1176add5765aed9568173acbd38bb610fd Nov 23 03:55:24 crc kubenswrapper[4751]: I1123 03:55:24.990238 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 03:55:25 crc kubenswrapper[4751]: W1123 03:55:25.013160 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-7b801d61c5a489aef2bbf55dce79507a1736795860ec9a64f386a5eb5fb49a0b WatchSource:0}: Error finding container 7b801d61c5a489aef2bbf55dce79507a1736795860ec9a64f386a5eb5fb49a0b: Status 404 returned error can't find the container with id 7b801d61c5a489aef2bbf55dce79507a1736795860ec9a64f386a5eb5fb49a0b Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.235474 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.235564 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.235601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.235624 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.235687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 
03:55:25.235783 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.235832 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:26.235819461 +0000 UTC m=+22.429490820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.235898 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:55:26.235862502 +0000 UTC m=+22.429533901 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.235911 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.235967 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.235990 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.236011 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.235905 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.236103 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.236128 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.236049 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:26.236035186 +0000 UTC m=+22.429706585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.236214 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:26.236193481 +0000 UTC m=+22.429864880 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.236238 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:26.236226151 +0000 UTC m=+22.429897540 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.793844 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8"} Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.793902 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3"} Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.793919 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7b801d61c5a489aef2bbf55dce79507a1736795860ec9a64f386a5eb5fb49a0b"} Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.795904 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"35250f96e62d6d1418ea7ba2a61f5d1176add5765aed9568173acbd38bb610fd"} Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.798801 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe"} Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.798881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a920b8e07980b3058178155cf48101cf9da11d19135f41fac62a9328d235d1e4"} Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.801492 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.802499 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.804716 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d" exitCode=255 Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.804807 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d"} Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.804894 4751 
scope.go:117] "RemoveContainer" containerID="c992bd52a18931e479af89c7a0633607918c574f97a7d66185fcee5ea8d198b0" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.811298 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.819280 4751 scope.go:117] "RemoveContainer" containerID="a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d" Nov 23 03:55:25 crc kubenswrapper[4751]: E1123 03:55:25.819596 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.820532 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.831163 4751 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.847714 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.870996 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.891021 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.908411 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.926295 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.946899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.970717 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c992bd52a18931e479af89c7a0633607918c574f97a7d66185fcee5ea8d198b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:18Z\\\",\\\"message\\\":\\\"W1123 03:55:07.854738 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
03:55:07.855152 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763870107 cert, and key in /tmp/serving-cert-3295046935/serving-signer.crt, /tmp/serving-cert-3295046935/serving-signer.key\\\\nI1123 03:55:08.176630 1 observer_polling.go:159] Starting file observer\\\\nW1123 03:55:08.178690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 03:55:08.179322 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:08.181364 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3295046935/tls.crt::/tmp/serving-cert-3295046935/tls.key\\\\\\\"\\\\nF1123 03:55:18.503990 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 
03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:25 crc kubenswrapper[4751]: I1123 03:55:25.996832 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.025349 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.045841 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.064046 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.079280 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.099080 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.247203 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.247334 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.247408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247449 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:55:28.247407804 +0000 UTC m=+24.441079193 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.247518 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.247604 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247606 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247658 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247664 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247736 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:28.247714462 +0000 UTC m=+24.441385831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247820 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:28.247790834 +0000 UTC m=+24.441462213 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247836 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247687 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247936 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247985 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:28.247975499 +0000 UTC m=+24.441646868 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.247857 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.248020 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.248115 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:28.248106102 +0000 UTC m=+24.441777471 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.546178 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.571760 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287f
aaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.572936 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.573025 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.593708 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c992bd52a18931e479af89c7a0633607918c574f97a7d66185fcee5ea8d198b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:18Z\\\",\\\"message\\\":\\\"W1123 03:55:07.854738 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
03:55:07.855152 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763870107 cert, and key in /tmp/serving-cert-3295046935/serving-signer.crt, /tmp/serving-cert-3295046935/serving-signer.key\\\\nI1123 03:55:08.176630 1 observer_polling.go:159] Starting file observer\\\\nW1123 03:55:08.178690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 03:55:08.179322 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:08.181364 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3295046935/tls.crt::/tmp/serving-cert-3295046935/tls.key\\\\\\\"\\\\nF1123 03:55:18.503990 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 
03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.611904 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.635789 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.643432 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.643512 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.643449 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.643592 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.643737 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.643800 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.652183 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.669296 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.684874 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.702102 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.736220 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c992bd52a18931e479af89c7a0633607918c574f97a7d66185fcee5ea8d198b0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:18Z\\\",\\\"message\\\":\\\"W1123 03:55:07.854738 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 03:55:07.855152 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763870107 cert, and key in /tmp/serving-cert-3295046935/serving-signer.crt, /tmp/serving-cert-3295046935/serving-signer.key\\\\nI1123 03:55:08.176630 1 observer_polling.go:159] Starting file observer\\\\nW1123 03:55:08.178690 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 03:55:08.179322 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:08.181364 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3295046935/tls.crt::/tmp/serving-cert-3295046935/tls.key\\\\\\\"\\\\nF1123 03:55:18.503990 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 
builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.763316 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.789427 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.809853 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.813162 4751 scope.go:117] "RemoveContainer" containerID="a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d" Nov 23 03:55:26 crc kubenswrapper[4751]: E1123 03:55:26.813396 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.817053 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.834047 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.854888 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.869089 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.887490 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.904849 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.921100 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.935232 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.949211 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.969335 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:26 crc kubenswrapper[4751]: I1123 03:55:26.984234 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.002372 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:26Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.016608 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:27Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.033391 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:27Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.060715 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\
\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34
d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:27Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.874013 4751 csr.go:261] certificate signing request csr-j4nzm is approved, waiting to be issued Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.881442 4751 csr.go:257] certificate signing request csr-j4nzm is issued Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.964783 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vwbwq"] Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.965064 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qft9h"] Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.965313 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.965668 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vwbwq" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.967009 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.967326 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.967410 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.968207 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.968339 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.968645 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.970038 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.980981 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:27Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:27 crc kubenswrapper[4751]: I1123 03:55:27.995516 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:27Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.005283 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.017211 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.028017 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.041871 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.056706 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.065442 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnncv\" (UniqueName: \"kubernetes.io/projected/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-kube-api-access-rnncv\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.065504 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-serviceca\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.065551 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6c4656b0-22d1-4a81-9d5c-d48b0521e0be-hosts-file\") pod \"node-resolver-vwbwq\" (UID: \"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\") " pod="openshift-dns/node-resolver-vwbwq" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.065582 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-host\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.065627 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpkrh\" (UniqueName: \"kubernetes.io/projected/6c4656b0-22d1-4a81-9d5c-d48b0521e0be-kube-api-access-xpkrh\") pod \"node-resolver-vwbwq\" (UID: \"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\") " pod="openshift-dns/node-resolver-vwbwq" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.078113 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.109456 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.129412 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.141910 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.151965 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 
03:55:28.166549 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpkrh\" (UniqueName: \"kubernetes.io/projected/6c4656b0-22d1-4a81-9d5c-d48b0521e0be-kube-api-access-xpkrh\") pod \"node-resolver-vwbwq\" (UID: \"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\") " pod="openshift-dns/node-resolver-vwbwq" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.166594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnncv\" (UniqueName: \"kubernetes.io/projected/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-kube-api-access-rnncv\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.166629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6c4656b0-22d1-4a81-9d5c-d48b0521e0be-hosts-file\") pod \"node-resolver-vwbwq\" (UID: \"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\") " pod="openshift-dns/node-resolver-vwbwq" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.166646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-serviceca\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.166662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-host\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.166714 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-host\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.166764 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6c4656b0-22d1-4a81-9d5c-d48b0521e0be-hosts-file\") pod \"node-resolver-vwbwq\" (UID: \"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\") " pod="openshift-dns/node-resolver-vwbwq" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.167495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-serviceca\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.171074 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.183254 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.186258 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpkrh\" (UniqueName: \"kubernetes.io/projected/6c4656b0-22d1-4a81-9d5c-d48b0521e0be-kube-api-access-xpkrh\") pod \"node-resolver-vwbwq\" (UID: \"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\") " pod="openshift-dns/node-resolver-vwbwq" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.186769 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnncv\" (UniqueName: \"kubernetes.io/projected/a9d40550-4dd0-4a06-8fb7-0e8ad74822c9-kube-api-access-rnncv\") pod \"node-ca-qft9h\" (UID: \"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\") " pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.198480 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.208901 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.219157 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.229883 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.243800 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.263534 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserve
r-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.266942 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.266998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.267020 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.267039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.267064 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267179 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267195 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267206 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267207 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267176 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:55:32.267143519 +0000 UTC m=+28.460814878 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267238 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267292 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267258 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:32.267244062 +0000 UTC m=+28.460915411 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267329 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267348 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267386 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:32.267337795 +0000 UTC m=+28.461009254 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267409 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:32.267400806 +0000 UTC m=+28.461072305 (durationBeforeRetry 4s). 
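The repeated `object "<namespace>"/"<name>" not registered` failures here (and the nginx-conf one in the entry that follows) come from the kubelet's cache-based ConfigMap and Secret managers: after a restart, a lookup is refused until the object has been registered for an admitted pod and its watch cache has synced. The `Caches populated for *v1.ConfigMap ...` lines further down mark exactly that recovery, after which these 4-second retries succeed. A minimal client-go sketch to confirm the referenced ConfigMaps exist server-side — the kubeconfig path and the use of client-go are illustrative assumptions, not something the log shows:

    // Check that the ConfigMaps a projected service-account volume
    // references actually exist in the namespace named in the errors.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	home, _ := os.UserHomeDir()
    	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ns := "openshift-network-diagnostics"
    	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
    		_, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		fmt.Printf("%s/%s: err=%v\n", ns, name, err)
    	}
    }

When both Get calls return nil errors, as they should here, the "not registered" message points at kubelet-local cache state rather than a missing object.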
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.267424 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:32.267416777 +0000 UTC m=+28.461088286 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.277059 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.279149 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qft9h" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.288772 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vwbwq" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.289845 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: W1123 03:55:28.290855 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9d40550_4dd0_4a06_8fb7_0e8ad74822c9.slice/crio-8e8c766e015b8624630d228caf8a81d2225ed5ad63e2a403802717e2dee2b4b1 WatchSource:0}: Error finding container 8e8c766e015b8624630d228caf8a81d2225ed5ad63e2a403802717e2dee2b4b1: Status 404 returned error can't find the container with id 8e8c766e015b8624630d228caf8a81d2225ed5ad63e2a403802717e2dee2b4b1 Nov 23 03:55:28 crc kubenswrapper[4751]: W1123 03:55:28.307180 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4656b0_22d1_4a81_9d5c_d48b0521e0be.slice/crio-ef756f86cba7c7408b360cfa6b4897564e63fe4af6711ecf564eb9cb22e795f5 
WatchSource:0}: Error finding container ef756f86cba7c7408b360cfa6b4897564e63fe4af6711ecf564eb9cb22e795f5: Status 404 returned error can't find the container with id ef756f86cba7c7408b360cfa6b4897564e63fe4af6711ecf564eb9cb22e795f5 Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.675655 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.675701 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.675761 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.675844 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.675911 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:28 crc kubenswrapper[4751]: E1123 03:55:28.675951 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.729092 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qxhsd"] Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.730076 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-pfb45"] Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.730231 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.730553 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.732193 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nfjcv"] Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.732907 4751 util.go:30] "No sandbox for pod can be found. 
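The three "Error syncing pod, skipping" entries above are secondary damage: sandbox creation is refused while the runtime reports NetworkReady=false, and it stays false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ — which ovnkube-node, added via SyncLoop just above, is responsible for writing. The readiness condition amounts to globbing that directory for a network config; a sketch of the equivalent probe, where the accepted extensions are the common CNI ones and an assumption here:

    // Probe behind NetworkPluginNotReady: look for a CNI network config
    // in the directory named by the error message in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	const dir = "/etc/kubernetes/cni/net.d"
    	var found []string
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		m, _ := filepath.Glob(filepath.Join(dir, pat))
    		found = append(found, m...)
    	}
    	if len(found) == 0 {
    		fmt.Printf("no CNI configuration file in %s (network provider not started yet?)\n", dir)
    		os.Exit(1)
    	}
    	for _, f := range found {
    		fmt.Println(f)
    	}
    }

Once ovnkube-node writes its conflist there, the runtime flips NetworkReady to true and the skipped pods above get sandboxes on the next sync.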
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.732999 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4dq7q"] Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.733414 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.735922 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.735942 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736031 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736063 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736259 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736368 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736379 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736460 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736770 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736772 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736812 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.736915 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.737172 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.737278 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.737725 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.737921 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.738146 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 23 03:55:28 crc 
kubenswrapper[4751]: I1123 03:55:28.738175 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.744761 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.760244 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-os-release\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776343 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-cnibin\") 
pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776373 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776389 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b70755e-47c0-464f-bcd9-a509700373ec-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776418 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06e1c062-27d7-4432-9f0e-db4e98f65b0e-rootfs\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-systemd-units\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-systemd\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776602 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-netns\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776626 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-var-lib-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776652 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-etc-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776667 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-slash\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776730 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06e1c062-27d7-4432-9f0e-db4e98f65b0e-proxy-tls\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776768 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06e1c062-27d7-4432-9f0e-db4e98f65b0e-mcd-auth-proxy-config\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pffx5\" (UniqueName: \"kubernetes.io/projected/06e1c062-27d7-4432-9f0e-db4e98f65b0e-kube-api-access-pffx5\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776873 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-system-cni-dir\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776908 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b70755e-47c0-464f-bcd9-a509700373ec-cni-binary-copy\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776940 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-kubelet\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776958 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.776974 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzbnh\" (UniqueName: 
\"kubernetes.io/projected/8b70755e-47c0-464f-bcd9-a509700373ec-kube-api-access-hzbnh\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.779290 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.805909 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.817027 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.819126 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901"} Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.820273 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vwbwq" event={"ID":"6c4656b0-22d1-4a81-9d5c-d48b0521e0be","Type":"ContainerStarted","Data":"a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19"} Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.820294 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vwbwq" event={"ID":"6c4656b0-22d1-4a81-9d5c-d48b0521e0be","Type":"ContainerStarted","Data":"ef756f86cba7c7408b360cfa6b4897564e63fe4af6711ecf564eb9cb22e795f5"} Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.821573 4751 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/node-ca-qft9h" event={"ID":"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9","Type":"ContainerStarted","Data":"af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544"} Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.821611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qft9h" event={"ID":"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9","Type":"ContainerStarted","Data":"8e8c766e015b8624630d228caf8a81d2225ed5ad63e2a403802717e2dee2b4b1"} Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.833380 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.849692 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.868148 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877548 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ee318377-acb2-4f75-9414-02313f3824e0-multus-daemon-config\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877584 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-netns\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877601 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-ovn-kubernetes\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877619 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-socket-dir-parent\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877636 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-var-lib-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: 
I1123 03:55:28.877650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-node-log\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877687 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-netns\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877715 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rshhs\" (UniqueName: \"kubernetes.io/projected/a97283a1-e673-4d60-889d-f0d483d72c37-kube-api-access-rshhs\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877726 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-var-lib-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-kubelet\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-config\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877814 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-k8s-cni-cncf-io\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877829 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-multus-certs\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877848 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-etc-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: 
I1123 03:55:28.877875 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-ovn\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-etc-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877908 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-bin\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.877946 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-slash\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878011 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-log-socket\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-env-overrides\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-slash\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878086 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06e1c062-27d7-4432-9f0e-db4e98f65b0e-proxy-tls\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878103 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06e1c062-27d7-4432-9f0e-db4e98f65b0e-mcd-auth-proxy-config\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878120 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pffx5\" (UniqueName: \"kubernetes.io/projected/06e1c062-27d7-4432-9f0e-db4e98f65b0e-kube-api-access-pffx5\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-cnibin\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878642 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-system-cni-dir\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878721 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-system-cni-dir\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878744 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878760 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-hostroot\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878774 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-etc-kubernetes\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cfjc\" (UniqueName: \"kubernetes.io/projected/ee318377-acb2-4f75-9414-02313f3824e0-kube-api-access-9cfjc\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b70755e-47c0-464f-bcd9-a509700373ec-cni-binary-copy\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " 
pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/06e1c062-27d7-4432-9f0e-db4e98f65b0e-mcd-auth-proxy-config\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879386 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b70755e-47c0-464f-bcd9-a509700373ec-cni-binary-copy\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.878866 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-system-cni-dir\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-netd\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879521 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a97283a1-e673-4d60-889d-f0d483d72c37-ovn-node-metrics-cert\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879537 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-os-release\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879552 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-cni-multus\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879573 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzbnh\" (UniqueName: \"kubernetes.io/projected/8b70755e-47c0-464f-bcd9-a509700373ec-kube-api-access-hzbnh\") pod \"multus-additional-cni-plugins-qxhsd\" 
(UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879575 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-openvswitch\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-kubelet\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-kubelet\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879680 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-os-release\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879783 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-cnibin\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879802 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-cni-bin\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879841 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-os-release\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879844 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879870 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-cni-dir\") pod \"multus-4dq7q\" (UID: 
\"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879884 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-netns\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879893 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-cnibin\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879900 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b70755e-47c0-464f-bcd9-a509700373ec-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06e1c062-27d7-4432-9f0e-db4e98f65b0e-rootfs\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879957 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-systemd-units\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879975 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-script-lib\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.879991 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-conf-dir\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.880069 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-systemd\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.880116 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee318377-acb2-4f75-9414-02313f3824e0-cni-binary-copy\") pod \"multus-4dq7q\" (UID: 
\"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.880174 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/06e1c062-27d7-4432-9f0e-db4e98f65b0e-rootfs\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.880195 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-systemd-units\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.880274 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8b70755e-47c0-464f-bcd9-a509700373ec-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.880467 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-systemd\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.880709 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/06e1c062-27d7-4432-9f0e-db4e98f65b0e-proxy-tls\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.880814 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b70755e-47c0-464f-bcd9-a509700373ec-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.881141 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.882245 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-23 03:50:27 +0000 UTC, rotation deadline is 2026-09-29 00:09:11.495770456 +0000 UTC Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.882281 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7436h13m42.613491172s for next certificate rotation Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.892425 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.893521 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pffx5\" (UniqueName: \"kubernetes.io/projected/06e1c062-27d7-4432-9f0e-db4e98f65b0e-kube-api-access-pffx5\") pod \"machine-config-daemon-pfb45\" (UID: \"06e1c062-27d7-4432-9f0e-db4e98f65b0e\") " pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.894563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzbnh\" (UniqueName: \"kubernetes.io/projected/8b70755e-47c0-464f-bcd9-a509700373ec-kube-api-access-hzbnh\") pod \"multus-additional-cni-plugins-qxhsd\" (UID: \"8b70755e-47c0-464f-bcd9-a509700373ec\") " pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.905927 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.917806 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.926951 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.941295 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.951684 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.961899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.977180 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: 
I1123 03:55:28.980644 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-cnibin\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980671 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980691 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-system-cni-dir\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-hostroot\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980721 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-etc-kubernetes\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980739 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cfjc\" (UniqueName: \"kubernetes.io/projected/ee318377-acb2-4f75-9414-02313f3824e0-kube-api-access-9cfjc\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-cni-multus\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980774 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-netd\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980789 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a97283a1-e673-4d60-889d-f0d483d72c37-ovn-node-metrics-cert\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980803 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-cni-multus\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980810 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-cnibin\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-netd\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980871 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-hostroot\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980884 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-etc-kubernetes\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980932 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-system-cni-dir\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.980969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-os-release\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981042 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-cni-bin\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981052 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-os-release\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981060 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-cni-dir\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-cni-dir\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-cni-bin\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981104 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-netns\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981170 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-conf-dir\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-netns\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-script-lib\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981279 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ee318377-acb2-4f75-9414-02313f3824e0-cni-binary-copy\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981306 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-ovn-kubernetes\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981315 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-conf-dir\") pod \"multus-4dq7q\" (UID: 
\"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981332 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-socket-dir-parent\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981381 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ee318377-acb2-4f75-9414-02313f3824e0-multus-daemon-config\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981402 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-kubelet\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981413 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-ovn-kubernetes\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981427 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-node-log\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981447 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-multus-socket-dir-parent\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981450 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rshhs\" (UniqueName: \"kubernetes.io/projected/a97283a1-e673-4d60-889d-f0d483d72c37-kube-api-access-rshhs\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981494 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-ovn\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981518 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-bin\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 
03:55:28.981545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-config\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981568 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-k8s-cni-cncf-io\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981588 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-multus-certs\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981623 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-log-socket\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-env-overrides\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981653 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-var-lib-kubelet\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981702 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-node-log\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981913 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-ovn\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981956 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-bin\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.981965 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/ee318377-acb2-4f75-9414-02313f3824e0-cni-binary-copy\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.982008 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-multus-certs\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.982024 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ee318377-acb2-4f75-9414-02313f3824e0-host-run-k8s-cni-cncf-io\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.982036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-log-socket\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.982107 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-script-lib\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.982160 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ee318377-acb2-4f75-9414-02313f3824e0-multus-daemon-config\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.982216 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-env-overrides\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.982519 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-config\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.983760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a97283a1-e673-4d60-889d-f0d483d72c37-ovn-node-metrics-cert\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.995178 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:28Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.995394 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rshhs\" (UniqueName: \"kubernetes.io/projected/a97283a1-e673-4d60-889d-f0d483d72c37-kube-api-access-rshhs\") pod \"ovnkube-node-nfjcv\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:28 crc kubenswrapper[4751]: I1123 03:55:28.995436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cfjc\" (UniqueName: \"kubernetes.io/projected/ee318377-acb2-4f75-9414-02313f3824e0-kube-api-access-9cfjc\") pod \"multus-4dq7q\" (UID: \"ee318377-acb2-4f75-9414-02313f3824e0\") " pod="openshift-multus/multus-4dq7q" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.004711 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.014650 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.024982 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.033432 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.043837 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.045705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.053780 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" Nov 23 03:55:29 crc kubenswrapper[4751]: W1123 03:55:29.055105 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06e1c062_27d7_4432_9f0e_db4e98f65b0e.slice/crio-0e5959c4eb4fc29d60ebf10c933e61698902be767726bc84a7ff780b1275f3e0 WatchSource:0}: Error finding container 0e5959c4eb4fc29d60ebf10c933e61698902be767726bc84a7ff780b1275f3e0: Status 404 returned error can't find the container with id 0e5959c4eb4fc29d60ebf10c933e61698902be767726bc84a7ff780b1275f3e0 Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.064184 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.066543 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.073407 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4dq7q" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.082584 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.115977 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.146606 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.170038 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.826377 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545" exitCode=0 Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.826451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.826727 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"5ade2dc3f463d1c6a3afbd8aed48b29630afab3ac1f2e4daa9e70a27c9263119"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.828950 4751 generic.go:334] "Generic (PLEG): container finished" podID="8b70755e-47c0-464f-bcd9-a509700373ec" containerID="d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86" exitCode=0 Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.829007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" event={"ID":"8b70755e-47c0-464f-bcd9-a509700373ec","Type":"ContainerDied","Data":"d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.829034 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" event={"ID":"8b70755e-47c0-464f-bcd9-a509700373ec","Type":"ContainerStarted","Data":"a51fb0b27a4d192091f9bad2e412b4783cecdfabb4afd883edbfd8bd92d47343"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.831312 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dq7q" event={"ID":"ee318377-acb2-4f75-9414-02313f3824e0","Type":"ContainerStarted","Data":"adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.831338 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dq7q" event={"ID":"ee318377-acb2-4f75-9414-02313f3824e0","Type":"ContainerStarted","Data":"08a67335642fb35294091768caff3fd918835598366eb5fcbf5f846b8e37ca53"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.834565 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.834631 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.834665 4751 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"0e5959c4eb4fc29d60ebf10c933e61698902be767726bc84a7ff780b1275f3e0"} Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.842788 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.854826 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.871215 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.884408 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.909852 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.924503 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.934946 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.945951 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.957934 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.974115 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:29 crc kubenswrapper[4751]: I1123 03:55:29.986127 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.001156 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:29Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.015328 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.026504 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.047062 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.060231 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.070930 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.085564 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.100645 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.116762 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.133445 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.146884 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.157082 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.169716 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.180896 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.194341 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.211436 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z 
is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.230598 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.250899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.267255 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.611193 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.613585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.613622 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.613634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.613701 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.625081 4751 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.625520 4751 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.626822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.626866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.626883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.626907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.626924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:30Z","lastTransitionTime":"2025-11-23T03:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.643235 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.643256 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.643335 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.643239 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.643536 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.643709 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.646642 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.650172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.650203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.650211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.650240 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.650252 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:30Z","lastTransitionTime":"2025-11-23T03:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.667610 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.670470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.670612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.670701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.670786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.670882 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:30Z","lastTransitionTime":"2025-11-23T03:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.686118 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.689639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.689897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.690019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.690118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.690200 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:30Z","lastTransitionTime":"2025-11-23T03:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.705843 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.708493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.708517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.708525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.708538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.708551 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:30Z","lastTransitionTime":"2025-11-23T03:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.724652 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: E1123 03:55:30.724765 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.726321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.726383 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.726396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.726415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.726429 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:30Z","lastTransitionTime":"2025-11-23T03:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.829172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.829444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.829537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.829631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.829712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:30Z","lastTransitionTime":"2025-11-23T03:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.843050 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.843117 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.843139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.843157 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.845677 4751 generic.go:334] "Generic (PLEG): container finished" podID="8b70755e-47c0-464f-bcd9-a509700373ec" containerID="86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398" exitCode=0 Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.845731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" event={"ID":"8b70755e-47c0-464f-bcd9-a509700373ec","Type":"ContainerDied","Data":"86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.864494 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.878311 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.898752 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.914809 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.933710 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.937222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.937277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.937305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.937331 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.937373 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:30Z","lastTransitionTime":"2025-11-23T03:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.961478 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.973163 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:30 crc kubenswrapper[4751]: I1123 03:55:30.986200 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:30Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.033224 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.046963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.047028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.047051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.047080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.047104 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.062833 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.088307 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.101712 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.115059 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.127442 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.145382 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.150049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.150150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.150210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.150268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.150329 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.253127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.253523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.253536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.253549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.253558 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.355795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.355827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.355837 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.355851 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.355863 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.459295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.459394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.459412 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.459442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.459462 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.561696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.561730 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.561739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.561753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.561763 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.664165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.664206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.664214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.664231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.664240 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.767076 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.767142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.767161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.767184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.767201 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.853979 4751 generic.go:334] "Generic (PLEG): container finished" podID="8b70755e-47c0-464f-bcd9-a509700373ec" containerID="1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822" exitCode=0 Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.854115 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" event={"ID":"8b70755e-47c0-464f-bcd9-a509700373ec","Type":"ContainerDied","Data":"1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.860797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.860852 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.869577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.869621 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.869631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.869648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.869658 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.874960 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.895183 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.914745 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.930826 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.956216 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z 
is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.973589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.973633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.973647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.973664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.973677 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:31Z","lastTransitionTime":"2025-11-23T03:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.979260 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:31 crc kubenswrapper[4751]: I1123 03:55:31.995938 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:31Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.013294 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.025423 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.042393 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.053677 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.076285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.076318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.076340 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.076364 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.076373 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.079068 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.095296 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.112890 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.126267 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.178910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.178962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.178972 4751 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.178989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.179001 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.281818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.281864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.281876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.281894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.281907 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.328762 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.328916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.329000 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.329339 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:55:40.328986208 +0000 UTC m=+36.522657607 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.329825 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.329934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.330016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330052 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330077 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330114 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330139 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:40.330111667 +0000 UTC m=+36.523783026 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330185 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:40.330163049 +0000 UTC m=+36.523834408 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330204 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330216 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330231 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330250 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330266 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:40.330252141 +0000 UTC m=+36.523923500 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.330390 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:40.330324513 +0000 UTC m=+36.523995922 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.384711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.384753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.384764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.384782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.384792 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.488061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.488101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.488111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.488125 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.488135 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.591540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.591623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.591648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.591681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.591705 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.643483 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.643563 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.643484 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.643655 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.643751 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:32 crc kubenswrapper[4751]: E1123 03:55:32.643864 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.695259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.695473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.695565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.695640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.695706 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.798523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.798584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.798600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.798626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.798644 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.869082 4751 generic.go:334] "Generic (PLEG): container finished" podID="8b70755e-47c0-464f-bcd9-a509700373ec" containerID="e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c" exitCode=0 Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.869123 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" event={"ID":"8b70755e-47c0-464f-bcd9-a509700373ec","Type":"ContainerDied","Data":"e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.892461 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.902296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.902384 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.902403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.902427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.902445 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:32Z","lastTransitionTime":"2025-11-23T03:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.914040 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.935111 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.956449 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:32 crc kubenswrapper[4751]: I1123 03:55:32.982013 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:32Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.005591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.005627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.005638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.005656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.005668 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.007881 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.042906 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.065636 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.088829 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.108408 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.108891 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.108916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.108927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.108945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.108956 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.127061 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.154278 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.172387 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.192967 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.211399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.211439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.211453 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.211470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.211483 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.214755 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.314696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.314762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.314780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 
03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.314806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.314824 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.325988 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.327090 4751 scope.go:117] "RemoveContainer" containerID="a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d" Nov 23 03:55:33 crc kubenswrapper[4751]: E1123 03:55:33.331923 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.418230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.418287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.418304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.418327 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.418373 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.521669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.521738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.521755 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.521782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.521801 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.624855 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.624915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.624932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.624955 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.624972 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.727965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.728336 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.728523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.728696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.728819 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.832277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.832404 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.832423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.832446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.832511 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.878534 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.882530 4751 generic.go:334] "Generic (PLEG): container finished" podID="8b70755e-47c0-464f-bcd9-a509700373ec" containerID="269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf" exitCode=0 Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.882601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" event={"ID":"8b70755e-47c0-464f-bcd9-a509700373ec","Type":"ContainerDied","Data":"269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.906569 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.929509 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.935480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.935528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.935545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.935568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.935585 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:33Z","lastTransitionTime":"2025-11-23T03:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:33 crc kubenswrapper[4751]: I1123 03:55:33.945456 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.008248 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:33Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.037841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.037888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.037902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.037926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.037940 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.041148 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.054068 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.068278 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.078473 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7
c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.086502 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.097842 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.112532 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.123824 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.133610 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.139696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.139738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.139752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.139772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.139785 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.144272 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.153502 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.242718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.242761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.242772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.242789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.242799 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.346749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.346815 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.346834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.346859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.346876 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.416779 4751 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.452428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.452498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.452518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.452545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.452563 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.555971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.556037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.556056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.556084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.556102 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.643055 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.643179 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:34 crc kubenswrapper[4751]: E1123 03:55:34.643245 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.643260 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:34 crc kubenswrapper[4751]: E1123 03:55:34.643411 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:34 crc kubenswrapper[4751]: E1123 03:55:34.643525 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.658766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.658823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.658841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.658864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.658882 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.660815 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.681442 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"
name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.701634 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.721999 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.742899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.761394 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.762281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.762381 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.762408 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.762439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.762462 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.787423 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.809644 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.830802 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.865872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.866455 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.866479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.866508 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.866529 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.868887 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.886518 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.891269 4751 generic.go:334] "Generic (PLEG): container finished" podID="8b70755e-47c0-464f-bcd9-a509700373ec" containerID="d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4" exitCode=0 Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.891340 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" event={"ID":"8b70755e-47c0-464f-bcd9-a509700373ec","Type":"ContainerDied","Data":"d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.905906 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.926095 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.945596 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.971917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.971977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.971996 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.972020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.972044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:34Z","lastTransitionTime":"2025-11-23T03:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.977337 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e
982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:34 crc kubenswrapper[4751]: I1123 03:55:34.994718 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.022047 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z 
is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.043146 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.057068 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.070758 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.073888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.073938 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.073948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.073962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.073972 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.084173 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.098916 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.110779 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.125568 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.142230 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.165006 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.176979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.177018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.177030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.177044 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.177053 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.188051 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.202781 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.222472 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269
c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.238065 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.279096 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.279133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.279142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.279157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.279168 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.381788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.381845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.381863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.381886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.381902 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.484983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.485040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.485059 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.485081 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.485099 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.588134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.588191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.588209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.588233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.588251 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.690731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.690788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.690805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.690828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.690843 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.794289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.794350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.794403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.794431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.794452 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.896634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.896696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.896714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.896739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.896758 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:35Z","lastTransitionTime":"2025-11-23T03:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.901125 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" event={"ID":"8b70755e-47c0-464f-bcd9-a509700373ec","Type":"ContainerStarted","Data":"bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.908462 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264"} Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.908897 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.908963 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.923132 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.938242 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.957949 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.969052 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.969147 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:35 crc kubenswrapper[4751]: I1123 03:55:35.985113 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\
"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c757
5caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:35.999945 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:35Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.000078 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.000113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.000129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.000151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.000165 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.018974 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.037290 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.052377 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.074304 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.093062 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.103397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.103432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.103444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.103462 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.103474 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.107235 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.130833 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.164169 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.179859 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.201790 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.206060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.206151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.206172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.206197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.206216 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.220799 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.236018 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.262513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22cb71a26c0663ab68bc6cee5b1a1430b55c6c52
17fd8a10cdabd5a2cbdc6264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.297534 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.309104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.309169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.309186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.309214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.309232 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.319102 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.340150 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.361732 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.379438 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.400670 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.412134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.412194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.412214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.412238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.412256 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.419273 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.433319 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.448646 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.462728 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.473629 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.486627 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256
:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:36Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.514961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.515018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.515035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.515059 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 
23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.515078 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.617509 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.617575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.617599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.617628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.617650 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.643952 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.644085 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.643963 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:36 crc kubenswrapper[4751]: E1123 03:55:36.644154 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:36 crc kubenswrapper[4751]: E1123 03:55:36.644291 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:55:36 crc kubenswrapper[4751]: E1123 03:55:36.644536 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.721253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.721322 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.721378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.721752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.721794 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.824714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.824777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.824799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.824824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.824842 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.912324 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.926669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.926737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.926756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.926779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:36 crc kubenswrapper[4751]: I1123 03:55:36.926796 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:36Z","lastTransitionTime":"2025-11-23T03:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.029719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.029801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.029828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.029860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.029883 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.136206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.136259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.136280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.136307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.136383 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.240297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.240382 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.240398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.240419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.240434 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.343429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.343487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.343504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.343529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.343546 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.446569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.446620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.446637 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.446660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.446676 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.549185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.549216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.549227 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.549242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.549254 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.652338 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.652377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.652390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.652402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.652412 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.754817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.754862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.754879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.754902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.754918 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.856875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.856910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.856919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.856932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.856941 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.915261 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.958526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.958578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.958596 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.958612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:37 crc kubenswrapper[4751]: I1123 03:55:37.958627 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:37Z","lastTransitionTime":"2025-11-23T03:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.060931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.060989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.061006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.061028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.061043 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.164102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.164150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.164160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.164178 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.164189 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.267709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.267788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.267811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.267841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.267862 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.369893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.369961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.369978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.370008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.370025 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.472284 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.472384 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.472403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.472427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.472444 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.575865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.575912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.575923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.575941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.575952 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.643633 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.643676 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.643633 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:55:38 crc kubenswrapper[4751]: E1123 03:55:38.643820 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:55:38 crc kubenswrapper[4751]: E1123 03:55:38.644056 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:55:38 crc kubenswrapper[4751]: E1123 03:55:38.644220 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.678914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.678984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.679008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.679038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.679062 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.781668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.781726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.781742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.781765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.781782 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.832232 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.885452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.885511 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.885538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.885560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.885574 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.922498 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/0.log" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.926924 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264" exitCode=1 Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.927012 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264"} Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.928636 4751 scope.go:117] "RemoveContainer" containerID="22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.949563 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:38Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.969931 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:38Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.986793 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:38Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.989724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.989834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.989857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.989919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:38 crc kubenswrapper[4751]: I1123 03:55:38.989944 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:38Z","lastTransitionTime":"2025-11-23T03:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.013424 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d7
42fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.031822 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.054406 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.080561 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.093431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.093503 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.093521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.093546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.093564 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.128156 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.152023 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.171306 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.192475 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.196555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.196624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.196649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.196677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.196699 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.231621 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:38Z\\\",\\\"message\\\":\\\"val\\\\nI1123 03:55:38.463748 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:38.463825 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 03:55:38.463982 6059 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:38.464843 6059 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 03:55:38.464860 6059 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 03:55:38.464904 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 03:55:38.464917 6059 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 03:55:38.464923 6059 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1123 03:55:38.464961 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 03:55:38.464970 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:38.464975 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 03:55:38.465009 6059 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 03:55:38.465031 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:38.465064 6059 factory.go:656] Stopping watch factory\\\\nI1123 03:55:38.465089 6059 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03:55:38.465115 6059 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:291\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.245861 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.262709 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.278171 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.299140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.299178 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.299190 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.299241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.299253 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.402042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.402079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.402091 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.402108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.402120 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.505279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.505325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.505363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.505380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.505392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.607749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.607796 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.607807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.607824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.607834 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.710655 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.711272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.711458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.711628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.711778 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.815097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.815554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.815658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.815761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.815886 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.918478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.918527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.918542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.918562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.918576 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:39Z","lastTransitionTime":"2025-11-23T03:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.933187 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/0.log" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.940646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d"} Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.942014 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.958924 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.975222 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:39 crc kubenswrapper[4751]: I1123 03:55:39.990744 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:39Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.006805 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.020819 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.020854 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.020889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.020905 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.020917 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.031875 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297d
fefa38fdfda83cb0a929f59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:38Z\\\",\\\"message\\\":\\\"val\\\\nI1123 03:55:38.463748 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:38.463825 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 03:55:38.463982 6059 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:38.464843 6059 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 03:55:38.464860 6059 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 03:55:38.464904 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 03:55:38.464917 6059 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 03:55:38.464923 6059 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1123 03:55:38.464961 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 03:55:38.464970 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:38.464975 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 03:55:38.465009 6059 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 03:55:38.465031 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:38.465064 6059 factory.go:656] Stopping watch factory\\\\nI1123 03:55:38.465089 6059 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03:55:38.465115 6059 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:291\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.061046 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.094234 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.121223 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.123003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.123038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.123047 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.123061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.123070 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.134726 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.149160 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.161140 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.200978 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.212556 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.225602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.225683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.225699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.225725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.225741 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.227731 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.246025 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.329039 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.329138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.329164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.329190 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.329209 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.422074 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.422257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422309 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:55:56.422278688 +0000 UTC m=+52.615950087 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.422407 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.422468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422505 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.422521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422540 4751 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422560 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422578 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422647 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422649 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422668 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422634 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:56.422616527 +0000 UTC m=+52.616287926 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422696 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422731 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:56.422706249 +0000 UTC m=+52.616377648 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422758 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:56.42274482 +0000 UTC m=+52.616416209 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.422792 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:56.422781391 +0000 UTC m=+52.616452780 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.431473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.431564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.431593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.431663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.431681 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.535256 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.535387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.535409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.535433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.535482 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.639523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.639591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.639613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.639643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.639665 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.643271 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.643289 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.643415 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.643492 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.643685 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.643822 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.743237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.743310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.743333 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.743402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.743429 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.846994 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.847055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.847090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.847129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.847152 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.948730 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/1.log" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.949728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.949779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.949797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.949820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.949836 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:40Z","lastTransitionTime":"2025-11-23T03:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.949918 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/0.log" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.955274 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d" exitCode=1 Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.955330 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d"} Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.955421 4751 scope.go:117] "RemoveContainer" containerID="22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.956715 4751 scope.go:117] "RemoveContainer" containerID="7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d" Nov 23 03:55:40 crc kubenswrapper[4751]: E1123 03:55:40.957037 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" Nov 23 03:55:40 crc kubenswrapper[4751]: I1123 03:55:40.980175 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:40Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.003624 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.026685 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.043686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.043751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.043772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.043808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.043832 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.052373 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: E1123 03:55:41.070757 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 
2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.072563 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.076709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.076757 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.076774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.076838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.076860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: E1123 03:55:41.097499 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
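
[Annotation] The NetworkReady=false condition repeated throughout this window comes from the container runtime reporting that no CNI network configuration is present; the kubelet keeps the node NotReady until a config file appears in /etc/kubernetes/cni/net.d/. Below is a rough stand-alone approximation of that directory check — the real logic lives in CRI-O's ocicni plumbing, not in this sketch, and the extension filter is a simplification.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkReady=false message above.
	confDir := "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}

	// ocicni-style loaders accept .conf, .conflist, and .json files;
	// this filter approximates that behavior.
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config candidate:", filepath.Join(confDir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", confDir,
			"- matches the KubeletNotReady message above")
	}
}
```
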
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 
2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.102616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.102667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.102684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.102708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.102725 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.110109 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297d
fefa38fdfda83cb0a929f59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22cb71a26c0663ab68bc6cee5b1a1430b55c6c5217fd8a10cdabd5a2cbdc6264\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:38Z\\\",\\\"message\\\":\\\"val\\\\nI1123 03:55:38.463748 6059 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:38.463825 6059 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 03:55:38.463982 6059 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:38.464843 6059 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 03:55:38.464860 6059 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 03:55:38.464904 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 03:55:38.464917 6059 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 03:55:38.464923 6059 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1123 03:55:38.464961 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 03:55:38.464970 6059 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:38.464975 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 03:55:38.465009 6059 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 03:55:38.465031 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:38.465064 6059 factory.go:656] Stopping watch factory\\\\nI1123 03:55:38.465089 6059 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03:55:38.465115 6059 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:291\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch 
factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: E1123 03:55:41.122544 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 
2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.127517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.127579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.127596 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.127620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.127638 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.149925 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: E1123 03:55:41.150081 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.154922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.154979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.154996 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.155018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.155035 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: E1123 03:55:41.175867 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 
2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: E1123 03:55:41.176098 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.176042 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.178651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.178701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.178718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.178741 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.178757 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.197075 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.212623 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.231477 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.248493 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.272534 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.281431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.281495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc 
kubenswrapper[4751]: I1123 03:55:41.281517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.281547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.281568 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.292154 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.311059 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.384572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.384650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.384668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.384744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.384765 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.487288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.487381 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.487402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.487430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.487447 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.590417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.590489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.590524 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.590554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.590580 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.693567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.693647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.693668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.693696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.693718 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.797641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.797706 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.797723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.797750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.797773 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.900832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.900890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.900907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.900934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.900951 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:41Z","lastTransitionTime":"2025-11-23T03:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.962504 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/1.log" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.968784 4751 scope.go:117] "RemoveContainer" containerID="7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d" Nov 23 03:55:41 crc kubenswrapper[4751]: E1123 03:55:41.969141 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" Nov 23 03:55:41 crc kubenswrapper[4751]: I1123 03:55:41.989675 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:41Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.004526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.004593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 
03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.004612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.004636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.004653 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.012109 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.032742 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.048567 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.074904 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.096512 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.107810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.107878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.107904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.107935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.107958 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.118925 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh"] Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.119613 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.122499 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.122615 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.134122 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7
887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.156101 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.175327 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.194236 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.209341 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.211610 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.211657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.211669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.211688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.211700 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.240812 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.242071 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97ef538a-f241-4f80-9f24-e7160a3a2379-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.242175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/97ef538a-f241-4f80-9f24-e7160a3a2379-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.242237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcq7n\" (UniqueName: \"kubernetes.io/projected/97ef538a-f241-4f80-9f24-e7160a3a2379-kube-api-access-rcq7n\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.242340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97ef538a-f241-4f80-9f24-e7160a3a2379-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.263119 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.279795 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.299855 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\
\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.315111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.315173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.315191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.315216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.315233 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.321218 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.338135 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.342879 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97ef538a-f241-4f80-9f24-e7160a3a2379-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.342940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97ef538a-f241-4f80-9f24-e7160a3a2379-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.342972 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/97ef538a-f241-4f80-9f24-e7160a3a2379-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.343024 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rcq7n\" (UniqueName: \"kubernetes.io/projected/97ef538a-f241-4f80-9f24-e7160a3a2379-kube-api-access-rcq7n\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.344118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97ef538a-f241-4f80-9f24-e7160a3a2379-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.344623 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/97ef538a-f241-4f80-9f24-e7160a3a2379-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.352437 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97ef538a-f241-4f80-9f24-e7160a3a2379-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.358035 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.374496 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcq7n\" (UniqueName: \"kubernetes.io/projected/97ef538a-f241-4f80-9f24-e7160a3a2379-kube-api-access-rcq7n\") pod \"ovnkube-control-plane-749d76644c-7n2gh\" (UID: \"97ef538a-f241-4f80-9f24-e7160a3a2379\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.377125 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c
191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.413454 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297d
fefa38fdfda83cb0a929f59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.418732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.418792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.418810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.418836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.418854 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.433300 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.443572 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" Nov 23 03:55:42 crc kubenswrapper[4751]: W1123 03:55:42.464471 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ef538a_f241_4f80_9f24_e7160a3a2379.slice/crio-69724a8a31a55649c580ec586f36225bbf341f7f89832312501fbbf2ef6c3902 WatchSource:0}: Error finding container 69724a8a31a55649c580ec586f36225bbf341f7f89832312501fbbf2ef6c3902: Status 404 returned error can't find the container with id 69724a8a31a55649c580ec586f36225bbf341f7f89832312501fbbf2ef6c3902 Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.470211 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.493636 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.513832 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.525828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.525924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.525989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.526018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.526073 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.533166 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.552899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.568156 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.590203 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.610454 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.629048 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.629083 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.629094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.629109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.629121 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.631088 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.644295 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:42 crc kubenswrapper[4751]: E1123 03:55:42.644484 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.644554 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.644559 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:42 crc kubenswrapper[4751]: E1123 03:55:42.644773 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:42 crc kubenswrapper[4751]: E1123 03:55:42.644887 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.652065 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/
\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.732489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.732540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.732559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.732582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.732599 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.836146 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.836228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.836253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.836285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.836307 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.869112 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-c5nsl"] Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.869887 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:42 crc kubenswrapper[4751]: E1123 03:55:42.869997 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.892788 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.926943 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.939312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.939406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.939424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.939449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.939467 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:42Z","lastTransitionTime":"2025-11-23T03:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.950650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.950741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8992g\" (UniqueName: \"kubernetes.io/projected/81fe3605-5395-4a60-ba10-3a9bad078169-kube-api-access-8992g\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.953516 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.973326 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.973335 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" event={"ID":"97ef538a-f241-4f80-9f24-e7160a3a2379","Type":"ContainerStarted","Data":"3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.973558 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" event={"ID":"97ef538a-f241-4f80-9f24-e7160a3a2379","Type":"ContainerStarted","Data":"2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.973587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" event={"ID":"97ef538a-f241-4f80-9f24-e7160a3a2379","Type":"ContainerStarted","Data":"69724a8a31a55649c580ec586f36225bbf341f7f89832312501fbbf2ef6c3902"} Nov 23 03:55:42 crc kubenswrapper[4751]: I1123 03:55:42.990851 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:42Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.011282 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.037483 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297d
fefa38fdfda83cb0a929f59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.041548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.041589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.041599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.041617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.041627 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.051209 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.051304 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8992g\" (UniqueName: \"kubernetes.io/projected/81fe3605-5395-4a60-ba10-3a9bad078169-kube-api-access-8992g\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:43 crc kubenswrapper[4751]: E1123 03:55:43.051421 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:43 crc kubenswrapper[4751]: E1123 03:55:43.051490 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs podName:81fe3605-5395-4a60-ba10-3a9bad078169 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:43.551472469 +0000 UTC m=+39.745143828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs") pod "network-metrics-daemon-c5nsl" (UID: "81fe3605-5395-4a60-ba10-3a9bad078169") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.056100 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.070150 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.077665 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8992g\" (UniqueName: \"kubernetes.io/projected/81fe3605-5395-4a60-ba10-3a9bad078169-kube-api-access-8992g\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.081818 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.094276 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.107328 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.119988 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.128432 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.141299 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.144505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.144547 4751 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.144556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.144569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.144580 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.156113 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.171114 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.185057 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.196758 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.211308 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.225520 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.236685 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.247079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.247133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.247151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.247172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.247190 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.250210 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.270974 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.294014 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.319288 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.336710 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.349930 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.349986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.350002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.350028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.350045 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.353680 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.385515 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 
03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.403722 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 
03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.435282 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.452684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.452727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.452744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.452781 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.452797 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.455431 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.474499 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.491529 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:43Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.555110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.555178 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.555196 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.555221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.555238 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.555345 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:43 crc kubenswrapper[4751]: E1123 03:55:43.555625 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:43 crc kubenswrapper[4751]: E1123 03:55:43.555733 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs podName:81fe3605-5395-4a60-ba10-3a9bad078169 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:44.555702165 +0000 UTC m=+40.749373564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs") pod "network-metrics-daemon-c5nsl" (UID: "81fe3605-5395-4a60-ba10-3a9bad078169") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.658482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.658550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.658567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.658592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.658609 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.761825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.761874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.761891 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.761918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.761935 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.864700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.864761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.864779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.864807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.864825 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.967577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.967645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.967664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.967689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:43 crc kubenswrapper[4751]: I1123 03:55:43.967707 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:43Z","lastTransitionTime":"2025-11-23T03:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.070478 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.070540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.070558 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.070584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.070604 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.173872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.173947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.173973 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.174001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.174019 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.277469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.277530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.277547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.277570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.277588 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.380603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.380647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.380660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.380682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.380697 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.483968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.484028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.484047 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.484068 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.484080 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.567021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:44 crc kubenswrapper[4751]: E1123 03:55:44.567265 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:44 crc kubenswrapper[4751]: E1123 03:55:44.567437 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs podName:81fe3605-5395-4a60-ba10-3a9bad078169 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:46.567406171 +0000 UTC m=+42.761077560 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs") pod "network-metrics-daemon-c5nsl" (UID: "81fe3605-5395-4a60-ba10-3a9bad078169") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.586810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.586943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.586965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.587001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.587031 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.643643 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.643717 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.643773 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.643660 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:44 crc kubenswrapper[4751]: E1123 03:55:44.643939 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:44 crc kubenswrapper[4751]: E1123 03:55:44.644712 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:44 crc kubenswrapper[4751]: E1123 03:55:44.644893 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:44 crc kubenswrapper[4751]: E1123 03:55:44.645051 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.668063 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.686232 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.690873 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.690937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.690956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.690981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.690998 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.707818 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.730484 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.750425 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.772569 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.792485 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.794623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.794705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.794724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.794752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.794771 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.809085 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.834105 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.856218 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.874955 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.897931 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.898004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.898022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.898049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.898070 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:44Z","lastTransitionTime":"2025-11-23T03:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.905870 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297d
fefa38fdfda83cb0a929f59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.926220 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.960523 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:44 crc kubenswrapper[4751]: I1123 03:55:44.978661 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:44Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.002200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.002267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.002293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.002325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.002382 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.006218 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:45Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.029444 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:45Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.105602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.105643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.105654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.105672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.105685 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.208890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.208976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.208999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.209026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.209045 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.311903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.311963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.311980 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.312003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.312024 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.415760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.415818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.415834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.415858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.415878 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.519096 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.519156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.519173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.519198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.519217 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.622544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.622613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.622631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.622656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.622679 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.726199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.726262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.726304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.726328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.726370 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.829602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.829668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.829687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.829714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.829734 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.933105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.933208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.933233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.933268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:45 crc kubenswrapper[4751]: I1123 03:55:45.933301 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:45Z","lastTransitionTime":"2025-11-23T03:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.035796 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.036239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.036497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.036704 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.036851 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.140409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.140483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.140507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.140535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.140556 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.243741 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.243902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.243927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.243948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.243964 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.346255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.346306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.346318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.346335 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.346366 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.449684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.449771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.449797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.449830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.449854 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.552585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.552644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.552662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.552689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.552707 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.587997 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:46 crc kubenswrapper[4751]: E1123 03:55:46.588174 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:46 crc kubenswrapper[4751]: E1123 03:55:46.588252 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs podName:81fe3605-5395-4a60-ba10-3a9bad078169 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:50.588228155 +0000 UTC m=+46.781899554 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs") pod "network-metrics-daemon-c5nsl" (UID: "81fe3605-5395-4a60-ba10-3a9bad078169") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.643416 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.643462 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.643497 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:46 crc kubenswrapper[4751]: E1123 03:55:46.643580 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.643591 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:46 crc kubenswrapper[4751]: E1123 03:55:46.643725 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:46 crc kubenswrapper[4751]: E1123 03:55:46.643885 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:46 crc kubenswrapper[4751]: E1123 03:55:46.643998 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.654887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.654935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.654951 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.654971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.654988 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.757800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.757877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.757895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.757922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.757939 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.860448 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.860527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.860550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.860582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.860601 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.963950 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.964017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.964039 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.964069 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:46 crc kubenswrapper[4751]: I1123 03:55:46.964091 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:46Z","lastTransitionTime":"2025-11-23T03:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.066933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.066999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.067017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.067041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.067062 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.169430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.169480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.169495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.169519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.169536 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.272818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.272873 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.272890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.272912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.272928 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.376070 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.376135 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.376154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.376180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.376197 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.478587 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.478658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.478676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.478700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.478717 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.581817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.581876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.581895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.581920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.581937 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.644603 4751 scope.go:117] "RemoveContainer" containerID="a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.685397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.685480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.685504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.685536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.685563 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.788891 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.788971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.788995 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.789031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.789055 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.891084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.891147 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.891171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.891203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.891226 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.994028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.994106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.994127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.994158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.994180 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:47Z","lastTransitionTime":"2025-11-23T03:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.995822 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 03:55:47 crc kubenswrapper[4751]: I1123 03:55:47.998759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.000293 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.022411 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73a
bd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.037380 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.056783 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.074341 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.097472 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.097543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.097567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc 
kubenswrapper[4751]: I1123 03:55:48.097598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.097622 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.102894 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:2
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.120940 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.138793 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.158208 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.177894 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.196879 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.200627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.200667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.200678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.200696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.200709 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.216692 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.234324 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.248835 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.273718 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.287172 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.302886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.302935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.302953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.302977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.302995 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.318040 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.331695 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:48Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.405979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.406010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.406021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.406035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.406046 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.508244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.508291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.508308 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.508331 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.508380 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.611616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.611677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.611698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.611721 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.611739 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.643949 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.643967 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.644024 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.644036 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:48 crc kubenswrapper[4751]: E1123 03:55:48.644118 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:48 crc kubenswrapper[4751]: E1123 03:55:48.644231 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:48 crc kubenswrapper[4751]: E1123 03:55:48.644372 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:48 crc kubenswrapper[4751]: E1123 03:55:48.644490 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.714800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.714866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.714885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.714913 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.714936 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.818399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.818459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.818476 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.818501 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.818518 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.921438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.921485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.921498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.921518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:48 crc kubenswrapper[4751]: I1123 03:55:48.921532 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:48Z","lastTransitionTime":"2025-11-23T03:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.024695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.024768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.024791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.024825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.024846 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.126828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.126877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.126889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.126907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.126919 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.229778 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.229820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.229831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.229846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.229858 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.332372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.332442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.332458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.332506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.332519 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.434897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.434964 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.434981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.435005 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.435022 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.537940 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.537997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.538016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.538040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.538056 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.641398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.641444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.641455 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.641471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.641483 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.744812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.744858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.744869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.744885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.744896 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.847988 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.848068 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.848085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.848111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.848134 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.951243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.951300 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.951317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.951373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:49 crc kubenswrapper[4751]: I1123 03:55:49.951392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:49Z","lastTransitionTime":"2025-11-23T03:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.053949 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.054087 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.054153 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.054180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.054197 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.157256 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.157386 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.157405 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.157430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.157448 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.260180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.260244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.260269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.260299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.260385 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.363796 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.363859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.363877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.363899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.363916 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.468076 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.468139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.468159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.468184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.468201 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.571805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.571866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.571887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.571920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.571942 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.632983 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:50 crc kubenswrapper[4751]: E1123 03:55:50.633174 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:50 crc kubenswrapper[4751]: E1123 03:55:50.633317 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs podName:81fe3605-5395-4a60-ba10-3a9bad078169 nodeName:}" failed. No retries permitted until 2025-11-23 03:55:58.633252482 +0000 UTC m=+54.826923871 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs") pod "network-metrics-daemon-c5nsl" (UID: "81fe3605-5395-4a60-ba10-3a9bad078169") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.643164 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.643277 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.643289 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:50 crc kubenswrapper[4751]: E1123 03:55:50.643499 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.643588 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:50 crc kubenswrapper[4751]: E1123 03:55:50.643689 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:50 crc kubenswrapper[4751]: E1123 03:55:50.643904 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:50 crc kubenswrapper[4751]: E1123 03:55:50.643970 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.675225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.675296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.675319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.675399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.675427 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.779639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.779705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.779723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.779751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.779767 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.882849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.882928 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.882947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.882971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.882987 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.985397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.985453 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.985469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.985490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:50 crc kubenswrapper[4751]: I1123 03:55:50.985508 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:50Z","lastTransitionTime":"2025-11-23T03:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.088321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.088404 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.088419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.088441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.088459 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.191434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.191488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.191502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.191524 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.191540 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.294847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.294907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.294924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.294982 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.295003 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.398764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.398827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.398845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.398870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.398888 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.462544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.462733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.462770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.462801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.462825 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: E1123 03:55:51.486308 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:51Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.491776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.491830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.491846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.491868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.491885 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: E1123 03:55:51.511954 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:51Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.517893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.517997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.518017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.518042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.518060 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: E1123 03:55:51.537916 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:51Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.542773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.542871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.542895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.542923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.542945 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: E1123 03:55:51.563892 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:51Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.568986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.569036 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.569056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.569085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.569108 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: E1123 03:55:51.588862 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:51Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:51 crc kubenswrapper[4751]: E1123 03:55:51.589002 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.591271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.591323 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.591340 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.591414 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.591434 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.694165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.694223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.694245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.694290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.694314 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.797066 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.797116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.797133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.797153 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.797168 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.900246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.900433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.900462 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.900488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:51 crc kubenswrapper[4751]: I1123 03:55:51.900506 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:51Z","lastTransitionTime":"2025-11-23T03:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.003027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.003088 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.003106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.003131 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.003154 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.106422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.106500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.106525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.106555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.106577 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.209536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.209612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.209646 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.209679 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.209699 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.314234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.314299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.314317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.314342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.314424 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.416634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.416699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.416717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.416740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.416756 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.519491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.519552 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.519570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.519593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.519614 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.623288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.623398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.623427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.623456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.623478 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.644061 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:52 crc kubenswrapper[4751]: E1123 03:55:52.644173 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.644235 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.644262 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:52 crc kubenswrapper[4751]: E1123 03:55:52.644412 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.644496 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:52 crc kubenswrapper[4751]: E1123 03:55:52.644671 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:52 crc kubenswrapper[4751]: E1123 03:55:52.644885 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.726207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.726248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.726260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.726276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.726289 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.833042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.833103 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.833120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.833146 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.833164 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.935726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.935788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.935808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.935835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:52 crc kubenswrapper[4751]: I1123 03:55:52.935858 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:52Z","lastTransitionTime":"2025-11-23T03:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.039766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.039817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.039829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.039848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.039862 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.142956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.143018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.143028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.143042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.143050 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.245759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.245844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.245868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.245898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.245916 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.349508 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.349591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.349612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.349636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.349654 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.452963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.453733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.453777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.453803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.453821 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.556862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.556930 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.556948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.556972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.556990 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.645050 4751 scope.go:117] "RemoveContainer" containerID="7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.659683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.659745 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.659763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.659788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.659806 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.763402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.763717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.763734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.763758 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.763776 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.868328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.868431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.868457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.868492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.868516 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.971737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.971770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.971781 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.971797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:53 crc kubenswrapper[4751]: I1123 03:55:53.971809 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:53Z","lastTransitionTime":"2025-11-23T03:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.026908 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/1.log" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.031918 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.032712 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.052818 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33
c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.068619 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.074181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.074229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.074247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.074273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.074291 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.089796 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.104579 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.124850 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.148663 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.171405 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.177806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.177880 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.177902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.177933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.177956 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.185984 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.209048 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.244521 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.262692 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.280705 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.280860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.280876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.280896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.280911 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.294676 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f2
98e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 
03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.311218 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 
03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.338836 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.353101 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.367206 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.379249 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.383031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.383075 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.383088 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.383106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.383118 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.485679 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.485723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.485733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.485747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.485758 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.587870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.587916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.587927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.587942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.587953 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.643916 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.643968 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.643919 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.644043 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:54 crc kubenswrapper[4751]: E1123 03:55:54.644028 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:54 crc kubenswrapper[4751]: E1123 03:55:54.644118 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:54 crc kubenswrapper[4751]: E1123 03:55:54.644282 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:54 crc kubenswrapper[4751]: E1123 03:55:54.644389 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.658850 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.669901 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.687839 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 
03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.689726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.689755 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.689765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.689779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.689792 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.701158 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.728551 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.740827 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.752161 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.763296 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.777728 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.786384 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.792435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.792473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.792484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.792498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.792510 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.799623 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.810479 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc 
kubenswrapper[4751]: I1123 03:55:54.824725 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.836705 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.848208 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.865469 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.886334 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:54Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.895172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.895219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.895235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.895257 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.895275 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.997899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.997968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.997991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.998020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:54 crc kubenswrapper[4751]: I1123 03:55:54.998041 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:54Z","lastTransitionTime":"2025-11-23T03:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.039988 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/2.log" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.041327 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/1.log" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.046259 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399" exitCode=1 Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.046329 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.046498 4751 scope.go:117] "RemoveContainer" containerID="7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.047465 4751 scope.go:117] "RemoveContainer" containerID="98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399" Nov 23 03:55:55 crc kubenswrapper[4751]: E1123 03:55:55.047658 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.067901 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.083697 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.100743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.100794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.100822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.100845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.100864 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.104599 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.120261 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.144924 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.161783 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.184117 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.203713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.203768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.203786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.203809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.203826 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.215191 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7860b9ef7d28658c9050a576fe0dff1575ed297dfefa38fdfda83cb0a929f59d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:40Z\\\",\\\"message\\\":\\\"0\\\\nI1123 03:55:39.896336 6180 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896725 6180 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.896880 6180 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:55:39.897229 6180 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 03:55:39.897332 6180 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 03:55:39.897376 6180 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:55:39.897415 6180 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 03:55:39.897429 6180 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 03:55:39.897452 6180 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:55:39.897484 6180 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 03:55:39.897497 6180 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:55:39.897497 6180 factory.go:656] Stopping watch factory\\\\nI1123 03:55:39.897512 6180 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 03:55:39.897519 6180 ovnkube.go:599] Stopped ovnkube\\\\nI1123 
03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.235604 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.267524 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.287280 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.307446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.307556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.307581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.307611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.307634 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.309007 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.330771 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.349028 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.367793 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c
4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.383562 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.404928 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:55Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.411546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.411613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.411633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.411657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.411675 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.514460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.514511 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.514531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.514554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.514571 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.621730 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.622266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.622485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.622523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.622552 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.725946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.726027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.726053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.726085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.726109 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.829491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.829588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.829607 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.829633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.829648 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
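The payloads in the "failed to patch status" entries above are strategic-merge patches: conditions and containerStatuses are merged by their "type"/"name" keys, and the $setElementOrder/conditions directive pins the order of the merged list. A sketch of applying one such patch with the apimachinery helper, trimmed to a two-condition status (the corev1.Pod struct supplies the merge keys; module versions are assumed compatible):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
    	original := []byte(`{"status":{"conditions":[` +
    		`{"type":"Ready","status":"False"},` +
    		`{"type":"PodScheduled","status":"True"}]}}`)
    	// Same shape as the kubelet's payloads: the directive fixes element
    	// order, while the conditions list carries only the changed entry.
    	patch := []byte(`{"status":{` +
    		`"$setElementOrder/conditions":[{"type":"Ready"},{"type":"PodScheduled"}],` +
    		`"conditions":[{"type":"Ready","status":"True"}]}}`)
    	merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Pod{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(merged))
    }
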
Has your network provider started?"} Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.933293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.933379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.933393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.933411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:55 crc kubenswrapper[4751]: I1123 03:55:55.933423 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:55Z","lastTransitionTime":"2025-11-23T03:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.036901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.036961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.036977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.037001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.037018 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
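The "back-off 20s restarting failed container" entry shortly below is the kubelet's crash-loop backoff for ovnkube-controller: each restart of a failing container doubles the wait up to a cap, so "20s" marks the second restart attempt. A sketch of that doubling schedule (the 10s base and 5m cap are assumed kubelet defaults, not taken from this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	base := 10 * time.Second    // assumed kubelet default initial backoff
    	maxDelay := 5 * time.Minute // assumed kubelet default maximum backoff
    	delay := base
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("restart %d: back-off %s\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }
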
Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.052881 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/2.log" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.059132 4751 scope.go:117] "RemoveContainer" containerID="98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399" Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.059425 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.078099 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.098411 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.117055 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.138188 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.140304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.140401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.140426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.140454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.140480 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
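Since the webhook container's serving certificate is mounted at /etc/webhook-cert/ (see its volumeMounts in the entry above), the expiry can also be confirmed against the live listener without touching files, by dialing 127.0.0.1:9743 with verification disabled and reading the presented leaf certificate. A sketch; InsecureSkipVerify is deliberate here because strict verification would reject the expired certificate we want to inspect, and nothing is sent over the connection:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		// Skipped on purpose: we only read the certificate.
    		InsecureSkipVerify: true,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	leaf := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("subject:  %s\n", leaf.Subject)
    	fmt.Printf("notAfter: %s\n", leaf.NotAfter.Format(time.RFC3339))
    	if time.Now().After(leaf.NotAfter) {
    		fmt.Println("=> expired, matching the kubelet's x509 error above")
    	}
    }
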
Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.157715 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.186457 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.209936 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.224937 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.243403 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.243453 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.243464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.243481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.243493 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.253966 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f2
98e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.270231 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.299977 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.326268 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.345627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.345672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.345684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.345701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.345712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.354690 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.373426 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.386610 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.396297 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.407805 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:56Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.448288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.448336 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.448367 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.448385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.448403 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.500288 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500525 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:56:28.500494775 +0000 UTC m=+84.694166174 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.500623 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.500672 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.500720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.500766 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500852 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500888 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500891 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500908 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500913 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 
03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500953 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:56:28.500936906 +0000 UTC m=+84.694608305 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500893 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500975 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 03:56:28.500964267 +0000 UTC m=+84.694635666 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500981 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.500995 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.501025 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:56:28.500997668 +0000 UTC m=+84.694669067 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.501066 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 03:56:28.501051079 +0000 UTC m=+84.694722568 (durationBeforeRetry 32s). 
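The recurring status-patch failures in this section all trace to one cause: the serving certificate of the "pod.network-node-identity.openshift.io" webhook expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-11-23, so every TLS handshake to https://127.0.0.1:9743 fails. A minimal Go sketch of the kind of validity check that produces this class of error; the file path is a placeholder, since the log does not say where the certificate is stored:

// certcheck.go — sketch of the NotBefore/NotAfter test behind the repeated
// "x509: certificate has expired or is not yet valid" failures above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path: the log does not reveal the webhook cert location.
	data, err := os.ReadFile("webhook-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This is the branch every webhook handshake above is hitting.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Printf("certificate is valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}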
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.551653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.551694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.551703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.551717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.551729 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.643498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.643549 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.643616 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.643675 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.643682 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.643833 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
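The other failure repeating throughout is the readiness gate itself: the node stays NotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet. A simplified Go sketch of that presence check; the accepted extensions (.conf, .conflist, .json) are a conventional assumption about CNI config loaders, not the kubelet's exact logic:

// cnicheck.go — simplified sketch of the check behind the repeated
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains any file with a conventional
// CNI config extension. The extension list is an assumption for illustration.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Println("container runtime network not ready: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configuration present")
}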
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.644014 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:56 crc kubenswrapper[4751]: E1123 03:55:56.644172 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.654842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.654904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.654923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.654970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.654994 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.758759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.758821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.758839 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.758862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.758879 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.861473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.861540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.861553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.861575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.861586 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.964212 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.964258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.964269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.964288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:56 crc kubenswrapper[4751]: I1123 03:55:56.964301 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:56Z","lastTransitionTime":"2025-11-23T03:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.067467 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.067540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.067557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.067582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.067659 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.170779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.170849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.170873 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.170904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.170927 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.273041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.273103 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.273127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.273157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.273179 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.376654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.376710 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.376727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.376750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.376772 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.479512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.479638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.479656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.479680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.479696 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.582296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.582348 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.582390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.582411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.582427 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.685523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.685592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.685609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.685631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.685648 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.789639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.789703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.789721 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.789745 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.789762 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.812564 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.827658 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.831899 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:57Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.850564 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:57Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.869151 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:57Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.889794 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:57Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.892247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.892311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.892328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.892382 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.892401 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.906229 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:57Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.930774 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:57Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.961426 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:57Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.976838 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:57Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.994661 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.994724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.994742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.994768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:57 crc kubenswrapper[4751]: I1123 03:55:57.994784 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:57Z","lastTransitionTime":"2025-11-23T03:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.010584 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f2
98e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.028621 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.060754 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.077984 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.098340 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.098437 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.098460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.098489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.098510 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.103677 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.119216 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.136844 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.151962 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.167007 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:55:58Z is after 2025-08-24T17:21:41Z" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.201292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.201377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.201401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.201431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.201455 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.304483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.304532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.304544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.304561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.304575 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.406561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.406602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.406617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.406634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.406646 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.510878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.511193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.511290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.511414 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.511518 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.615230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.615279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.615298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.615320 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.615337 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.644065 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.644112 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.644174 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:55:58 crc kubenswrapper[4751]: E1123 03:55:58.644280 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.644405 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:58 crc kubenswrapper[4751]: E1123 03:55:58.644604 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:55:58 crc kubenswrapper[4751]: E1123 03:55:58.644738 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:55:58 crc kubenswrapper[4751]: E1123 03:55:58.645146 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.718922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.719301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.719504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.719661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.719803 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.726729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:55:58 crc kubenswrapper[4751]: E1123 03:55:58.726923 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:58 crc kubenswrapper[4751]: E1123 03:55:58.727038 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs podName:81fe3605-5395-4a60-ba10-3a9bad078169 nodeName:}" failed. No retries permitted until 2025-11-23 03:56:14.727007945 +0000 UTC m=+70.920679334 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs") pod "network-metrics-daemon-c5nsl" (UID: "81fe3605-5395-4a60-ba10-3a9bad078169") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.823383 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.823452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.823470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.823494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.823512 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.926788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.926863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.926886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.926916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:58 crc kubenswrapper[4751]: I1123 03:55:58.926938 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:58Z","lastTransitionTime":"2025-11-23T03:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.029639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.029690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.029709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.029732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.029748 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.136632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.136732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.136793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.136822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.136838 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.240428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.240494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.240511 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.240538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.240557 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.343926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.343989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.344008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.344032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.344049 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.447506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.447574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.447591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.447614 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.447631 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.551110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.551411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.551424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.551449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.551464 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.653666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.653744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.653762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.653794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.653814 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.757268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.757731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.757750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.757777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.757797 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.861112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.861176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.861199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.861228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.861248 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.965140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.965224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.965250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.965288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:55:59 crc kubenswrapper[4751]: I1123 03:55:59.965311 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:55:59Z","lastTransitionTime":"2025-11-23T03:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.069418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.069496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.069522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.069553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.069579 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.172897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.172944 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.172956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.172978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.172992 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.276466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.276505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.276515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.276529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.276540 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.379748 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.379789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.379825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.379845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.379858 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.481857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.481901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.481912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.481929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.481941 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.584692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.584754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.584777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.584807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.584854 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.643422 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.643493 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:00 crc kubenswrapper[4751]: E1123 03:56:00.643621 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.643655 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.643746 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:00 crc kubenswrapper[4751]: E1123 03:56:00.643842 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:00 crc kubenswrapper[4751]: E1123 03:56:00.643964 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:00 crc kubenswrapper[4751]: E1123 03:56:00.644057 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.687616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.687661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.687678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.687700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.687719 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.733173 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.757549 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.778436 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.791813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.791872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.791890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.791914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.791931 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.800927 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.832096 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.850260 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 
03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.869047 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.896017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.896066 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.896087 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.896112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.896130 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.905726 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.927776 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.949083 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.965334 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.988498 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:00Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.999256 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.999307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.999326 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.999375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:00 crc kubenswrapper[4751]: I1123 03:56:00.999399 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:00Z","lastTransitionTime":"2025-11-23T03:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.006502 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.029243 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.045939 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.064043 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.083980 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.102852 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.102932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.102956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.102989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.103012 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.104935 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.126136 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.205980 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.206050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.206076 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.206104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.206128 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.309522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.309591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.309607 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.309631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.309649 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.412750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.412985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.413031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.413061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.413084 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.515713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.515791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.515812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.515841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.515864 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.619388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.619465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.619486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.619510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.619529 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.722710 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.722823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.722844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.722870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.722887 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.826318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.826427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.826446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.826475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.826492 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.930238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.930298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.930315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.930339 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.930392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.938040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.938086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.938105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.938127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.938144 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: E1123 03:56:01.959504 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.964977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.965539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.965584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.965619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.965666 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:01 crc kubenswrapper[4751]: E1123 03:56:01.986629 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:01Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.990787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.990840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.990858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.990882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:01 crc kubenswrapper[4751]: I1123 03:56:01.990898 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:01Z","lastTransitionTime":"2025-11-23T03:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: E1123 03:56:02.009896 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:02Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.014432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.014485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.014508 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.014578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.014604 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: E1123 03:56:02.033785 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:02Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.038222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.038263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.038280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.038302 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.038321 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: E1123 03:56:02.053189 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:02Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:02 crc kubenswrapper[4751]: E1123 03:56:02.053429 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.055666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.055713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.055729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.055751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.055767 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.159161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.159198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.159207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.159221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.159230 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.262184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.262247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.262267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.262292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.262311 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.364932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.365012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.365022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.365038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.365048 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.467807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.467879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.467902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.467933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.467955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.570882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.570944 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.570971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.571007 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.571033 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.643871 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.644001 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:02 crc kubenswrapper[4751]: E1123 03:56:02.644240 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.644271 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.644028 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:02 crc kubenswrapper[4751]: E1123 03:56:02.644431 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:02 crc kubenswrapper[4751]: E1123 03:56:02.644539 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:02 crc kubenswrapper[4751]: E1123 03:56:02.644754 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.673719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.673797 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.673816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.673840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.673860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.776744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.776805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.776841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.776876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.776901 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.880754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.880833 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.880854 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.880880 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.880898 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.983974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.984075 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.984131 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.984157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:02 crc kubenswrapper[4751]: I1123 03:56:02.984174 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:02Z","lastTransitionTime":"2025-11-23T03:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.086965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.087026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.087043 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.087065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.087086 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.190052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.190105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.190122 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.190145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.190163 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.293519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.293582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.293600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.293623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.293640 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.396146 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.396231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.396256 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.396287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.396310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.500416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.500507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.500531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.500562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.500592 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.604394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.604707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.605001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.605199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.605441 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.709150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.709530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.709678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.709818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.709945 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.812733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.813511 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.813554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.813577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.813589 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.917227 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.917301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.917319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.917375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:03 crc kubenswrapper[4751]: I1123 03:56:03.917394 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:03Z","lastTransitionTime":"2025-11-23T03:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.020725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.020809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.020831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.020860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.020881 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.124030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.124106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.124129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.124158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.124178 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.227377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.227425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.227441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.227464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.227481 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.330425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.330502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.330526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.330554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.330577 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.433507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.433575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.433593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.433616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.433631 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.537238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.537304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.537321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.537418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.537470 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.641991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.642053 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.642075 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.642103 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.642122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.643304 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.643412 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.643788 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:04 crc kubenswrapper[4751]: E1123 03:56:04.643750 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.643873 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:04 crc kubenswrapper[4751]: E1123 03:56:04.643964 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:04 crc kubenswrapper[4751]: E1123 03:56:04.644058 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:04 crc kubenswrapper[4751]: E1123 03:56:04.644192 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.671591 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.691691 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.707507 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.737628 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.745626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.745687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc 
kubenswrapper[4751]: I1123 03:56:04.745709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.745737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.745760 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.755792 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.776873 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.800759 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.826826 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.839392 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.848507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.848549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.848559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.848577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.848589 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.853680 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.872144 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.887018 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.909216 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f2
98e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.925941 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.940428 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.950713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.950753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.950767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.950783 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.950797 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:04Z","lastTransitionTime":"2025-11-23T03:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.953975 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.971021 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:04 crc kubenswrapper[4751]: I1123 03:56:04.985124 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:04Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.053608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.053678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.053695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.053721 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.053738 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:05Z","lastTransitionTime":"2025-11-23T03:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.157040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.157079 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.157092 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.157112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:05 crc kubenswrapper[4751]: I1123 03:56:05.157126 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:05Z","lastTransitionTime":"2025-11-23T03:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record node-status block repeats roughly every 100 ms from 03:56:05.260 through 03:56:06.601, differing only in timestamps ...]
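Every status record above carries the same root cause: the kubelet reports the container runtime network as not ready because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A minimal Python sketch of the same existence check (the directory path is taken from the log messages; the accepted extensions follow common CNI conventions and are an assumption here, not something the log states):

    import pathlib

    # Directory the kubelet records above say is empty.
    CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")

    def cni_config_present() -> bool:
        """True once a CNI plugin has dropped a config file into the watched directory."""
        if not CNI_DIR.is_dir():
            return False
        # .conf/.conflist/.json are the conventional CNI config extensions (assumption).
        return any(p.suffix in {".conf", ".conflist", ".json"} for p in CNI_DIR.iterdir())

    if __name__ == "__main__":
        print("CNI config present:", cni_config_present())

Until that check passes, the node condition stays Ready=False with reason KubeletNotReady, exactly as the setters.go records show.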
Nov 23 03:56:06 crc kubenswrapper[4751]: I1123 03:56:06.643108 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:06 crc kubenswrapper[4751]: I1123 03:56:06.643217 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:06 crc kubenswrapper[4751]: I1123 03:56:06.643118 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:06 crc kubenswrapper[4751]: E1123 03:56:06.643287 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:56:06 crc kubenswrapper[4751]: I1123 03:56:06.643220 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:06 crc kubenswrapper[4751]: E1123 03:56:06.643381 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:06 crc kubenswrapper[4751]: E1123 03:56:06.643535 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:56:06 crc kubenswrapper[4751]: E1123 03:56:06.643764 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
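The four "Error syncing pod" records above name each stuck pod and its UID as structured attributes. A small sketch for pulling those pairs out of a saved excerpt (assuming the journal text has been reflowed to one record per line; the file path argument is illustrative):

    import re
    import sys

    # Matches the trailing attributes of "Error syncing pod, skipping" records,
    # e.g. pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-...".
    PATTERN = re.compile(r'"Error syncing pod, skipping".*pod="([^"]+)" podUID="([^"]+)"')

    def stuck_pods(path):
        pods = {}
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                match = PATTERN.search(line)
                if match:
                    pods[match.group(1)] = match.group(2)  # pod -> podUID
        return pods

    if __name__ == "__main__":
        for pod, uid in sorted(stuck_pods(sys.argv[1]).items()):
            print(pod, uid, sep="\t")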
[... node-status block repeats roughly every 100 ms from 03:56:06.704 through 03:56:08.566 ...]
Nov 23 03:56:08 crc kubenswrapper[4751]: I1123 03:56:08.644026 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:08 crc kubenswrapper[4751]: I1123 03:56:08.644040 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:08 crc kubenswrapper[4751]: I1123 03:56:08.644470 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:08 crc kubenswrapper[4751]: I1123 03:56:08.644507 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:08 crc kubenswrapper[4751]: E1123 03:56:08.644657 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:56:08 crc kubenswrapper[4751]: E1123 03:56:08.644777 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
Nov 23 03:56:08 crc kubenswrapper[4751]: I1123 03:56:08.644977 4751 scope.go:117] "RemoveContainer" containerID="98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399"
Nov 23 03:56:08 crc kubenswrapper[4751]: E1123 03:56:08.645171 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:08 crc kubenswrapper[4751]: E1123 03:56:08.645019 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:56:08 crc kubenswrapper[4751]: E1123 03:56:08.645310 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37"
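The last record explains why the CNI config never appears: ovnkube-controller, part of the OVN-Kubernetes network provider that would likely write it, is itself in CrashLoopBackOff with a 20 s back-off. For orientation, a sketch of the kubelet's default restart back-off schedule (10 s base, doubling per restart, capped at 5 minutes; these are standard kubelet defaults, not values read from this log beyond the 20 s entry):

    def crashloop_backoff(restarts: int, base: float = 10.0, cap: float = 300.0) -> float:
        """Seconds the kubelet waits before restart number `restarts` (0-based)."""
        return min(base * (2 ** restarts), cap)

    if __name__ == "__main__":
        # 10s, 20s, 40s, 80s, 160s, 300s, 300s; "back-off 20s" above matches restart 1.
        for n in range(7):
            print(n, crashloop_backoff(n))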
[... node-status block repeats roughly every 100 ms from 03:56:08.668 through 03:56:10.624 ...]
Has your network provider started?"} Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.624106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.624163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.624181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.624207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.624225 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:10Z","lastTransitionTime":"2025-11-23T03:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.643455 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.643498 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.643501 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:10 crc kubenswrapper[4751]: E1123 03:56:10.643606 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.643718 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:10 crc kubenswrapper[4751]: E1123 03:56:10.643879 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:10 crc kubenswrapper[4751]: E1123 03:56:10.644059 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:10 crc kubenswrapper[4751]: E1123 03:56:10.644124 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.726829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.726884 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.726902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.726927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.726944 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:10Z","lastTransitionTime":"2025-11-23T03:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.829960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.830213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.830224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.830241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.830256 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:10Z","lastTransitionTime":"2025-11-23T03:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.932719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.932784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.932802 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.932831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:10 crc kubenswrapper[4751]: I1123 03:56:10.932850 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:10Z","lastTransitionTime":"2025-11-23T03:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.035162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.035226 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.035244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.035271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.035289 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.138299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.138414 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.138441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.138472 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.138498 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.240779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.240841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.240858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.240885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.240906 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.343518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.343561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.343572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.343588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.343600 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.445541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.445578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.445589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.445604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.445617 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.548077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.548124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.548136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.548154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.548167 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.650641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.650681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.650694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.650709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.650721 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.753474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.753522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.753530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.753544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.753555 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.856716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.856772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.856789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.856813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.856829 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.959932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.960005 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.960023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.960046 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:11 crc kubenswrapper[4751]: I1123 03:56:11.960065 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:11Z","lastTransitionTime":"2025-11-23T03:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.062126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.062168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.062180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.062195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.062206 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.164861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.164912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.164931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.164954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.164973 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.221019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.221067 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.221083 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.221109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.221123 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.239758 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:12Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.244216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.244276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.244294 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.244318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.244335 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.264093 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:12Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.268914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.268973 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.268992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.269017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.269036 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.283783 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:12Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.287918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.287958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.287969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.287986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.287999 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.305680 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:12Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.310248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.310332 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
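All of the failed status patches above share one root cause: the node.network-node-identity.openshift.io webhook on https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-23. Below is a minimal diagnostic sketch, not part of OpenShift or of this log, that performs the same inspection the failing TLS handshake implies; it has to run on the node itself because the webhook listens on loopback, and it deliberately skips chain verification so the expired certificate can still be read.

    // checkwebhookcert.go (hypothetical helper, stdlib only): print the
    // validity window of the certificate served on the endpoint named in
    // the kubelet error above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // InsecureSkipVerify lets the handshake succeed even though the
        // certificate is expired; we only want to inspect it, not trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("handshake failed: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0] // leaf certificate
        fmt.Printf("subject:    %s\n", cert.Subject)
        fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339)) // expect 2025-08-24T17:21:41Z per the log
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate is expired, matching the x509 error in the log")
        }
    }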
event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.310397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.310420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.310438 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.326987 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:12Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.327222 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.329235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
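The "Unable to update node status" entry above marks the kubelet giving up after its bounded retry loop (in upstream kubelet this is the nodeStatusUpdateRetry constant, 5 attempts per sync in recent versions); it retries on the next sync interval, which is why the same block repeats throughout this log. The condition={...} objects in the "Node became not ready" entries are ordinary JSON. A self-contained sketch, using a reduced illustrative struct rather than the real corev1.NodeCondition type, that decodes one of them:

    // decodecondition.go (illustrative only): parse the condition object
    // logged by setters.go:603 using a minimal struct of the fields it
    // actually carries in this log.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "time"
    )

    // nodeCondition mirrors only the fields seen in the log line; the real
    // kubelet publishes the richer corev1.NodeCondition type.
    type nodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        // Condition copied verbatim from the log entries above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            log.Fatalf("unmarshal: %v", err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }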
event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.329310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.329329 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.329393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.329411 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.432694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.432761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.432780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.432804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.432825 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.536117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.536173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.536192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.536216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.536234 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.639756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.639824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.639846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.639875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.639898 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.643441 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.643469 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.643584 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.643607 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.643666 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.643812 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.643995 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:12 crc kubenswrapper[4751]: E1123 03:56:12.644111 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.743168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.743231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.743252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.743278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.743296 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.846204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.846268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.846287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.846312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.846330 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.949052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.949124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.949175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.949204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:12 crc kubenswrapper[4751]: I1123 03:56:12.949223 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:12Z","lastTransitionTime":"2025-11-23T03:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.052149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.052200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.052212 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.052230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.052243 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.154704 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.154760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.154803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.154827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.154843 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.256908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.256961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.256980 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.257003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.257020 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.359584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.359649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.359669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.359695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.359712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.462256 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.462297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.462310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.462332 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.462376 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.564993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.565055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.565072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.565097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.565113 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.667789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.667839 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.667857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.667878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.667896 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.770199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.770251 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.770263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.770282 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.770293 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.872792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.872891 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.872910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.872933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.872984 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.975599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.975642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.975653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.975671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:13 crc kubenswrapper[4751]: I1123 03:56:13.975725 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:13Z","lastTransitionTime":"2025-11-23T03:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.078798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.078864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.078889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.078921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.078946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.181088 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.181138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.181182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.181207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.181223 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.283795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.283849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.283865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.283887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.283904 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.386446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.386499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.386516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.386538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.386554 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.488222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.488267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.488275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.488290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.488300 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.590507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.590565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.590573 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.590586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.590595 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.643182 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.643225 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.643202 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.643189 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:14 crc kubenswrapper[4751]: E1123 03:56:14.643300 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:14 crc kubenswrapper[4751]: E1123 03:56:14.643405 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:14 crc kubenswrapper[4751]: E1123 03:56:14.643475 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:14 crc kubenswrapper[4751]: E1123 03:56:14.643531 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.656532 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc 
kubenswrapper[4751]: I1123 03:56:14.676505 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7
f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.687619 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.693176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.693236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.693271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.693304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.693326 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.698464 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.718657 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276
019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.733427 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.751180 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.761684 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.770717 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.785503 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
1-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.796106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.796158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.796169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.796188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.796200 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.796472 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.798495 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:14 crc kubenswrapper[4751]: E1123 03:56:14.798633 4751 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:56:14 crc kubenswrapper[4751]: E1123 03:56:14.798681 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs podName:81fe3605-5395-4a60-ba10-3a9bad078169 nodeName:}" failed. No retries permitted until 2025-11-23 03:56:46.798668788 +0000 UTC m=+102.992340147 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs") pod "network-metrics-daemon-c5nsl" (UID: "81fe3605-5395-4a60-ba10-3a9bad078169") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.809595 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\
":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.822193 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: 
I1123 03:56:14.833696 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.843752 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.852285 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.864377 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.879879 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:14Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.898836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.898891 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.898902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.898916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:14 crc kubenswrapper[4751]: I1123 03:56:14.898924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:14Z","lastTransitionTime":"2025-11-23T03:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.003093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.003170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.003185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.003223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.003239 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.106876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.106942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.106953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.106969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.106981 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.209740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.209793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.209803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.209824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.209861 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.313152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.313197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.313207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.313224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.313234 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.417531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.417580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.417593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.417612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.417625 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.520398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.520442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.520451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.520466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.520477 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.623451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.623532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.623550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.623578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.623603 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.726556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.726620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.726638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.726661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.726708 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.829090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.829141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.829158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.829181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.829199 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.932080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.932148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.932172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.932202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:15 crc kubenswrapper[4751]: I1123 03:56:15.932226 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:15Z","lastTransitionTime":"2025-11-23T03:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.035567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.035632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.035650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.035676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.035694 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.138500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.138580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.138598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.138623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.138641 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.240937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.240960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.240968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.240981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.240989 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.343195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.343239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.343252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.343269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.343281 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.445532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.445576 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.445592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.445613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.445630 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.548960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.549016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.549034 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.549056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.549075 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.643694 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.643796 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.643805 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.643818 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:16 crc kubenswrapper[4751]: E1123 03:56:16.643936 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:16 crc kubenswrapper[4751]: E1123 03:56:16.644096 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:16 crc kubenswrapper[4751]: E1123 03:56:16.644185 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:16 crc kubenswrapper[4751]: E1123 03:56:16.644379 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.650872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.650909 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.650920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.650935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.650947 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.753323 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.753402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.753420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.753441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.753459 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.855241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.855293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.855314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.855338 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.855385 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.958330 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.958408 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.958425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.958446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:16 crc kubenswrapper[4751]: I1123 03:56:16.958463 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:16Z","lastTransitionTime":"2025-11-23T03:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.060518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.060558 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.060572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.060590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.060604 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.126973 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/0.log" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.127029 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee318377-acb2-4f75-9414-02313f3824e0" containerID="adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a" exitCode=1 Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.127059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dq7q" event={"ID":"ee318377-acb2-4f75-9414-02313f3824e0","Type":"ContainerDied","Data":"adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.127606 4751 scope.go:117] "RemoveContainer" containerID="adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.160468 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3c
f52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.162619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.162642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.162652 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.162665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.162674 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.174519 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.197841 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.215810 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.235867 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.255294 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f2
98e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.266020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.266095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.266112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.266163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.266181 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.270507 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.282327 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.292485 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.309113 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:16Z\\\",\\\"message\\\":\\\"2025-11-23T03:55:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a\\\\n2025-11-23T03:55:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a to /host/opt/cni/bin/\\\\n2025-11-23T03:55:31Z [verbose] multus-daemon started\\\\n2025-11-23T03:55:31Z [verbose] Readiness Indicator file check\\\\n2025-11-23T03:56:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.322818 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.351838 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.369216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.369256 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.369269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.369303 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.369317 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.376927 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.402966 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.415772 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.424389 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.433465 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.444779 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:17Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.471402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.471421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.471429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.471441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.471449 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.573496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.573544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.573561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.573584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.573601 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.676765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.676842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.676869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.676900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.676921 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.780018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.780080 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.780097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.780120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.780137 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.882804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.882876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.882894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.882917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.882934 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.985871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.985952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.985972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.985995 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:17 crc kubenswrapper[4751]: I1123 03:56:17.986013 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:17Z","lastTransitionTime":"2025-11-23T03:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.088436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.088871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.089020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.089163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.089322 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.132806 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/0.log" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.132868 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dq7q" event={"ID":"ee318377-acb2-4f75-9414-02313f3824e0","Type":"ContainerStarted","Data":"226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d"} Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.153831 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.166498 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.181405 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.191977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.192014 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.192026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.192041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.192052 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.201575 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.220434 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.233226 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.256449 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.272692 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.284998 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.294219 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.294391 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.294422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.294446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.294464 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.305275 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f2
98e6496a2136252b8fa41399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.326018 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.341450 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.371789 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z"
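Every one of these status-patch failures has the same root cause: the webhook endpoint at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-23. The error string is produced by Go's crypto/x509 validity check, which a small standalone program (hypothetical, not OpenShift code) can reproduce:

```go
// Hypothetical standalone program reproducing the x509 failure above: a
// certificate whose NotAfter has passed fails verification with exactly
// the message the kubelet keeps logging.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Validity window ends at the NotAfter stamp quoted in the log.
	notAfter := time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC)

	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity-ca"}, // illustrative name
		NotBefore:             notAfter.AddDate(-1, 0, 0),
		NotAfter:              notAfter,
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)

	roots := x509.NewCertPool()
	roots.AddCert(cert)

	// Verify at the node's current clock time, 2025-11-23T03:56:18Z.
	_, err := cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2025, 11, 23, 3, 56, 18, 0, time.UTC),
	})
	fmt.Println(err)
	// x509: certificate has expired or is not yet valid:
	// current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z
}
```

On a CRC instance this pattern typically appears when the VM is started months after its certificates were minted; the cluster operators usually regenerate the certificates once the control plane settles, so these errors tend to be transient.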
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.397054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.397128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.397141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.397158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.397170 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.397170 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.407554 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.427018 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.442097 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.457722 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:16Z\\\",\\\"message\\\":\\\"2025-11-23T03:55:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a\\\\n2025-11-23T03:55:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a to /host/opt/cni/bin/\\\\n2025-11-23T03:55:31Z [verbose] multus-daemon started\\\\n2025-11-23T03:55:31Z [verbose] Readiness Indicator file check\\\\n2025-11-23T03:56:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:18Z is after 2025-08-24T17:21:41Z"
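The termination message preserved in this record explains kube-multus's restartCount of 1: the daemon copied its CNI binaries, started, and then polled for the default network's readiness-indicator file, 10-ovn-kubernetes.conf, until the poll timed out (roughly 45 seconds, 03:55:31 to 03:56:16 in the message). "timed out waiting for the condition" is the stock error from the Kubernetes apimachinery wait package; a hedged sketch of such a polling loop (interval and timeout are illustrative values, not multus's actual settings):

```go
// Hedged sketch of a readiness-indicator wait like the one described in the
// multus termination message above; not the daemon's actual code.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	indicator := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"

	// wait.PollImmediate checks once right away, then every interval until
	// the timeout; on timeout it returns the stock error whose text,
	// "timed out waiting for the condition", appears in the log above.
	err := wait.PollImmediate(time.Second, 45*time.Second, func() (bool, error) {
		_, statErr := os.Stat(indicator)
		if statErr == nil {
			return true, nil // default network wrote its config: done
		}
		if os.IsNotExist(statErr) {
			return false, nil // keep polling
		}
		return false, statErr // unexpected filesystem error: abort
	})
	if err != nil {
		fmt.Printf("still waiting for readinessindicatorfile @ %s: %v\n", indicator, err)
		os.Exit(1) // matches the exitCode 1 recorded in lastState above
	}
}
```

The restarted container is Running again at 03:56:17Z; once ovn-kubernetes drops the indicator file into place, the same poll succeeds and the daemon proceeds.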
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.500469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.500823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.501045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.501236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.501497 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.604676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.604952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.605038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.605236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.605330 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.655361 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:18 crc kubenswrapper[4751]: E1123 03:56:18.655621 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.655686 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.655640 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:18 crc kubenswrapper[4751]: E1123 03:56:18.655905 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:56:18 crc kubenswrapper[4751]: E1123 03:56:18.655960 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.656474 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:18 crc kubenswrapper[4751]: E1123 03:56:18.656598 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.708642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.708697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.708715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.708737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.708755 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.815342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.815497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.815518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.815543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.815599 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.918642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.918685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.918697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.918731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:18 crc kubenswrapper[4751]: I1123 03:56:18.918745 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:18Z","lastTransitionTime":"2025-11-23T03:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.021238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.021268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.021277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.021302 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.021310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.123963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.124017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.124033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.124058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.124079 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.227632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.227975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.228159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.228329 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.228533 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.331852 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.332711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.332899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.333620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.333795 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.436908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.436966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.436983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.437006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.437023 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.540490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.540550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.540568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.540591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.540611 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.643992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.644047 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.644063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.644087 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.644104 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.746754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.746804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.746819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.746841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.746857 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.849151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.849225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.849251 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.849278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.849298 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.951784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.951904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.951923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.951947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:19 crc kubenswrapper[4751]: I1123 03:56:19.951964 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:19Z","lastTransitionTime":"2025-11-23T03:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.054850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.054927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.054952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.054981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.055004 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.157317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.157425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.157450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.157479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.157501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.260176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.260232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.260249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.260271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.260288 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.362493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.362547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.362564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.362586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.362604 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.464942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.465013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.465037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.465066 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.465087 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.567829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.567901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.567919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.567942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.567961 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.643703 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.643757 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:20 crc kubenswrapper[4751]: E1123 03:56:20.643918 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.643953 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:20 crc kubenswrapper[4751]: E1123 03:56:20.644141 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.644242 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:20 crc kubenswrapper[4751]: E1123 03:56:20.644322 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:20 crc kubenswrapper[4751]: E1123 03:56:20.644589 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.670859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.670931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.670954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.670978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.670995 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.774872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.774934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.774956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.774980 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.774997 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.877983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.878047 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.878065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.878087 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.878103 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.981581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.981637 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.981661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.981692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:20 crc kubenswrapper[4751]: I1123 03:56:20.981716 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:20Z","lastTransitionTime":"2025-11-23T03:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.084691 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.084732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.084741 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.084754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.084763 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.186962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.187019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.187036 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.187060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.187077 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.290060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.290117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.290133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.290157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.290174 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.392605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.392665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.392688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.392712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.392735 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.495700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.495763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.495780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.495808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.495825 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.599387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.599456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.599482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.599512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.599535 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.703055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.703122 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.703145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.703172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.703193 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.805781 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.805826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.805844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.805868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.805885 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.908245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.908296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.908314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.908335 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:21 crc kubenswrapper[4751]: I1123 03:56:21.908391 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:21Z","lastTransitionTime":"2025-11-23T03:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.011102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.011152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.011188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.011218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.011261 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.113846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.113912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.113934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.113960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.113979 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.217692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.217770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.217798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.217831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.217855 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.320965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.321014 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.321027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.321043 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.321055 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.386432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.386489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.386506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.386531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.386548 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.407413 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:22Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.412281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.412415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.412433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.412457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.412475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.432690 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:22Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.437309 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.437395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.437416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.437438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.437457 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.457747 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:22Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.463438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.463520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.463544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.463572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.463597 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.483341 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:22Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.488000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.488061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.488077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.488102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.488120 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.508731 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:22Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.508882 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.510497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.510546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.510565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.510589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.510607 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.613654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.613751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.613788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.613819 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.613840 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.644080 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.644130 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.644268 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.644384 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.644457 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.644565 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.645189 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:22 crc kubenswrapper[4751]: E1123 03:56:22.645317 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.645754 4751 scope.go:117] "RemoveContainer" containerID="98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.716241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.716653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.716671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.716694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.716714 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.820154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.820206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.820222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.820248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.820270 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.922672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.922725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.922741 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.922765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:22 crc kubenswrapper[4751]: I1123 03:56:22.922784 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:22Z","lastTransitionTime":"2025-11-23T03:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.025941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.026019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.026038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.026064 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.026083 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.129049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.129128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.129154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.129182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.129205 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.153676 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/2.log" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.157249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.158308 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.186325 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.209926 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 
03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.232371 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.232457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.232575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.232586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.232604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.232616 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.307741 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.331854 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.334616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.334656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.334665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.334679 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.334689 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.345157 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.359247 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.371276 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.387815 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b66
66c7cf24656b9ddfdf041fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.400830 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.412297 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.426580 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:16Z\\\",\\\"message\\\":\\\"2025-11-23T03:55:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a\\\\n2025-11-23T03:55:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a to /host/opt/cni/bin/\\\\n2025-11-23T03:55:31Z [verbose] multus-daemon started\\\\n2025-11-23T03:55:31Z [verbose] Readiness Indicator file check\\\\n2025-11-23T03:56:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.437321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.437366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.437375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.437388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.437397 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.440581 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.455185 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.470297 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.489263 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.514086 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.532504 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:23Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.540629 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.540652 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.540661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.540675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.540685 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.643217 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.643244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.643251 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.643262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.643270 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.746439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.746494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.746513 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.746535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.746553 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.850201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.850234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.850244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.850259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.850271 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.953009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.953054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.953066 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.953088 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:23 crc kubenswrapper[4751]: I1123 03:56:23.953100 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:23Z","lastTransitionTime":"2025-11-23T03:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.056283 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.056391 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.056412 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.056436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.056522 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.159927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.159988 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.160005 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.160029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.160046 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.163659 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/3.log" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.164690 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/2.log" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.169853 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" exitCode=1 Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.169930 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.170004 4751 scope.go:117] "RemoveContainer" containerID="98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.175030 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 03:56:24 crc kubenswrapper[4751]: E1123 03:56:24.175331 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.192183 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.210209 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.242958 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:23Z\\\",\\\"message\\\":\\\"23 03:56:23.818988 6765 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.819015 6765 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1123 03:56:23.819220 6765 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.819563 6765 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.820161 6765 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 03:56:23.820180 6765 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 03:56:23.820227 6765 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 03:56:23.820235 6765 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:56:23.820255 6765 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 03:56:23.820268 6765 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:56:23.820281 6765 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:56:23.820303 6765 factory.go:656] Stopping watch factory\\\\nI1123 03:56:23.820317 6765 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.261692 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.264773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.264833 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.264857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.264889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.264910 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.280908 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.314407 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276
019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.333129 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.353188 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.368895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.368995 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.369020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.369046 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.369066 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.372165 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.392444 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.411119 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.432484 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:16Z\\\",\\\"message\\\":\\\"2025-11-23T03:55:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a\\\\n2025-11-23T03:55:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a to /host/opt/cni/bin/\\\\n2025-11-23T03:55:31Z [verbose] multus-daemon started\\\\n2025-11-23T03:55:31Z [verbose] Readiness Indicator file check\\\\n2025-11-23T03:56:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.449300 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.466969 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.472137 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.472194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.472213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.472239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.472256 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.486529 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.506118 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.521749 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.544166 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.580509 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.580582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc 
kubenswrapper[4751]: I1123 03:56:24.580601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.580628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.580647 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.643663 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.643767 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.643833 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.643714 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:24 crc kubenswrapper[4751]: E1123 03:56:24.643936 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:24 crc kubenswrapper[4751]: E1123 03:56:24.644028 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:24 crc kubenswrapper[4751]: E1123 03:56:24.644263 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:24 crc kubenswrapper[4751]: E1123 03:56:24.644472 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.662993 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.679484 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.683583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.683644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.683662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.683687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.683704 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.703055 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.720411 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.745036 4751 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.765786 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.786499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.787024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.787517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.787855 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.790119 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.790845 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.810689 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.831378 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.851498 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.869134 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.893182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.893454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.893653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.893857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.894097 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.900375 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98480475c03932bfccd1fb63ac068147bc9da1f298e6496a2136252b8fa41399\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:55:54Z\\\",\\\"message\\\":\\\"twork-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI1123 03:55:54.702176 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-pfb45\\\\nI1123 03:55:54.702099 6414 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qft9h\\\\nI1123 03:55:54.702173 6414 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh\\\\nI1123 03:55:54.702204 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}\\\\nI1123 03:55:54.702207 6414 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1123 03:55:54.702212 6414 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:23Z\\\",\\\"message\\\":\\\"23 03:56:23.818988 6765 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.819015 6765 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1123 03:56:23.819220 6765 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.819563 6765 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.820161 6765 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 03:56:23.820180 6765 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 03:56:23.820227 6765 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 03:56:23.820235 6765 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:56:23.820255 6765 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 03:56:23.820268 6765 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:56:23.820281 6765 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:56:23.820303 6765 factory.go:656] Stopping watch factory\\\\nI1123 03:56:23.820317 6765 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:56:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.919213 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.938752 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.973082 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.995181 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:16Z\\\",\\\"message\\\":\\\"2025-11-23T03:55:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a\\\\n2025-11-23T03:55:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a to /host/opt/cni/bin/\\\\n2025-11-23T03:55:31Z [verbose] multus-daemon started\\\\n2025-11-23T03:55:31Z [verbose] Readiness Indicator file check\\\\n2025-11-23T03:56:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:24Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.997600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.997661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.997688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.997716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:24 crc kubenswrapper[4751]: I1123 03:56:24.997742 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:24Z","lastTransitionTime":"2025-11-23T03:56:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.016014 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.032188 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.100236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.100299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.100317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.100342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.100389 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.176458 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/3.log" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.181923 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 03:56:25 crc kubenswrapper[4751]: E1123 03:56:25.183715 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.203836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.204143 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.204170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.204194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.204211 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.204642 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.220385 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.248552 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:16Z\\\",\\\"message\\\":\\\"2025-11-23T03:55:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a\\\\n2025-11-23T03:55:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a to /host/opt/cni/bin/\\\\n2025-11-23T03:55:31Z [verbose] multus-daemon started\\\\n2025-11-23T03:55:31Z [verbose] Readiness Indicator file check\\\\n2025-11-23T03:56:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.275448 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.292445 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.306791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.306861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.306878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.306903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.306921 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.313671 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.330066 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.349931 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.360995 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.382490 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.401730 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.410093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.410122 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.410163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.410179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.410228 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.418040 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.448061 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:23Z\\\",\\\"message\\\":\\\"23 03:56:23.818988 6765 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.819015 6765 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1123 03:56:23.819220 6765 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.819563 6765 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.820161 6765 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 03:56:23.820180 6765 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 03:56:23.820227 6765 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 03:56:23.820235 6765 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:56:23.820255 6765 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 03:56:23.820268 6765 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:56:23.820281 6765 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:56:23.820303 6765 factory.go:656] Stopping watch factory\\\\nI1123 03:56:23.820317 6765 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:56:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.464724 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 
03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.481465 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.512630 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276019e04bb1d41d35340dfb18794124517f686d90b6359
b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.513685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.513768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 
03:56:25.513786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.513837 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.513856 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.532822 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.553845 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:25Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.616672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.616745 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.616763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.616788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.616810 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.719785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.719856 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.719872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.719895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.719912 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.823129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.823198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.823216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.823242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.823259 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.925801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.926140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.926270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.926493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:25 crc kubenswrapper[4751]: I1123 03:56:25.926637 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:25Z","lastTransitionTime":"2025-11-23T03:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.030464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.030525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.030547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.030574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.030591 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.133742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.134201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.134273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.134343 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.134441 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.238085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.238183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.238200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.238224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.238241 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.340714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.340770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.340792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.340820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.340842 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.442828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.442868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.442878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.442892 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.442901 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.545220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.545260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.545268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.545283 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.545292 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.644015 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.644122 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.644046 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:26 crc kubenswrapper[4751]: E1123 03:56:26.644225 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.644330 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:26 crc kubenswrapper[4751]: E1123 03:56:26.644472 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:26 crc kubenswrapper[4751]: E1123 03:56:26.644522 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:26 crc kubenswrapper[4751]: E1123 03:56:26.644669 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.648291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.648395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.648422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.648452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.648475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.755744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.755824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.755862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.755896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.755923 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.860033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.860102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.860124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.860150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.860167 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.962857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.962910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.962926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.962948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:26 crc kubenswrapper[4751]: I1123 03:56:26.962964 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:26Z","lastTransitionTime":"2025-11-23T03:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.065470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.065561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.065586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.065621 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.065645 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.168865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.168956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.168979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.169006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.169023 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.272218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.272290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.272312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.272339 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.272385 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.376394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.376463 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.376486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.376520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.376546 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.480065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.480126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.480143 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.480167 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.480183 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.583159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.583230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.583249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.583274 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.583292 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.687128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.687194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.687216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.687272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.687297 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.792216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.792280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.792304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.792381 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.792457 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.895506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.895633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.895650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.895678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.895695 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.998939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.998976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.998986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.999001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:27 crc kubenswrapper[4751]: I1123 03:56:27.999012 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:27Z","lastTransitionTime":"2025-11-23T03:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.104611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.104665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.104686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.104709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.104725 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.207827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.207885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.207898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.207920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.207934 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.310659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.310716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.310733 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.310756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.310773 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.413037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.413114 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.413137 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.413166 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.413188 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.515815 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.515866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.515884 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.515908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.515924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.573179 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.573395 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573428 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-23 03:57:32.573390115 +0000 UTC m=+148.767061504 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.573536 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573556 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.573621 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.573682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573748 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573788 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573803 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573814 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573824 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
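[Editor's note] The UnmountVolume.TearDown failure above reports that kubevirt.io.hostpath-provisioner is absent from the kubelet's list of registered CSI drivers. Per-node driver registrations are mirrored in the node's CSINode object, so one way to inspect that list out-of-band is a small client-go sketch; this is a diagnostic illustration, not part of the logged components, and the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object for node "crc" mirrors the kubelet's per-node CSI
	// registrations; the TearDownAt error above means this list lacked the
	// hostpath provisioner at the time of the unmount attempt.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("CSI drivers registered on node crc:")
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println(" -", d.Name) // expect kubevirt.io.hostpath-provisioner once registered
	}
}
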
No retries permitted until 2025-11-23 03:57:32.573786886 +0000 UTC m=+148.767458285 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573851 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573898 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573923 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.573872 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 03:57:32.573853447 +0000 UTC m=+148.767524816 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.574027 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 03:57:32.574002031 +0000 UTC m=+148.767673460 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.574056 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 03:57:32.574040292 +0000 UTC m=+148.767711781 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.619151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.619196 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.619207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.619223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.619236 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.643755 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.643834 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.643876 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.643938 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.644060 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.644076 4751 util.go:30] "No sandbox for pod can be found. 
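[Editor's note] The "object ... not registered" MountVolume.SetUp failures above generally indicate that the kubelet has not (yet) registered the pod's ConfigMap/Secret references in its local cache, which is consistent with the node being NotReady here; they do not by themselves prove the objects are missing from the API server. A hedged client-go sketch to tell the two cases apart (kubeconfig path is a placeholder; object names are taken from the errors above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// ConfigMaps named in the projected-volume errors above. If these Gets
	// succeed, the objects exist in the API server and the "not registered"
	// errors reflect kubelet-side cache state rather than missing objects.
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		_, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(ctx, name, metav1.GetOptions{})
		fmt.Printf("configmap openshift-network-diagnostics/%s: err=%v\n", name, err)
	}
	// Secret named in the networking-console-plugin-cert error above.
	_, err = cs.CoreV1().Secrets("openshift-network-console").Get(ctx, "networking-console-plugin-cert", metav1.GetOptions{})
	fmt.Printf("secret openshift-network-console/networking-console-plugin-cert: err=%v\n", err)
}
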
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.644175 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:28 crc kubenswrapper[4751]: E1123 03:56:28.644271 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.727011 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.727100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.727144 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.727177 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.727199 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.831117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.831174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.831190 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.831215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.831238 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.934503 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.934561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.934579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.934603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:28 crc kubenswrapper[4751]: I1123 03:56:28.934621 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:28Z","lastTransitionTime":"2025-11-23T03:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.037689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.037756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.037780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.037807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.037826 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.142786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.142848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.142866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.142897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.142915 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.245990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.246037 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.246049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.246068 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.246082 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.349471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.349569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.349588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.349614 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.349630 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.452383 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.452434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.452444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.452457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.452469 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.555549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.555620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.555643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.555671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.555692 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.659024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.659071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.659082 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.659098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.659109 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.762017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.762086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.762100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.762117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.762128 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.865084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.865146 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.865161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.865184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.865207 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.967709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.967784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.967808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.967835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:29 crc kubenswrapper[4751]: I1123 03:56:29.967856 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:29Z","lastTransitionTime":"2025-11-23T03:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.070719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.070772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.070790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.070812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.070828 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.174723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.174771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.174786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.174816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.174829 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.278269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.278333 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.278388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.278415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.278433 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.381035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.381090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.381102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.381119 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.381132 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.483801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.483867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.483890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.483919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.483939 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.588132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.588182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.588200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.588221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.588239 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.644105 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.644148 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:30 crc kubenswrapper[4751]: E1123 03:56:30.644266 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.644111 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.644386 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:30 crc kubenswrapper[4751]: E1123 03:56:30.644521 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:30 crc kubenswrapper[4751]: E1123 03:56:30.644644 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:30 crc kubenswrapper[4751]: E1123 03:56:30.644838 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.691488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.691544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.691561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.691584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.691602 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.802808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.802862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.802881 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.802903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.802919 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.905416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.905448 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.905457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.905469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:30 crc kubenswrapper[4751]: I1123 03:56:30.905478 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:30Z","lastTransitionTime":"2025-11-23T03:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.008137 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.008199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.008218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.008245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.008264 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.111178 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.111219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.111228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.111241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.111249 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.214058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.214132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.214157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.214181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.214201 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.317588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.317747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.317779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.317868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.317906 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.421005 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.421077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.421095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.421146 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.421171 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.523788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.523860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.523877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.523900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.523919 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.626845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.627197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.627435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.627604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.627739 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.730606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.730655 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.730672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.730697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.730715 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.833334 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.833713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.833859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.834014 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.834157 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.936885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.936944 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.936967 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.936997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:31 crc kubenswrapper[4751]: I1123 03:56:31.937020 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:31Z","lastTransitionTime":"2025-11-23T03:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.040026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.040161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.040186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.040223 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.040262 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.144423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.144850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.144877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.144906 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.144927 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.255330 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.255441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.255459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.255484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.255502 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.359016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.359084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.359101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.359123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.359140 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.461805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.461859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.461870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.461887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.461899 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.564680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.564749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.564760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.564804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.564818 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.643658 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.643762 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.643950 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.643961 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.644216 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.644322 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.644563 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.645115 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.667567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.667632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.667651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.667676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.667694 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.741769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.741851 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.741885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.741915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.742145 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.763240 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:32Z is after 
2025-08-24T17:21:41Z" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.768458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.768500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.768517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.768538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.768557 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.788523 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:32Z is after 
2025-08-24T17:21:41Z" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.793253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.793315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.793336 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.793403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.793428 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.813074 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:32Z is after 
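[Editor's note] The kubelet does not give up after one failed patch: it retries the node-status update a small fixed number of times per sync interval (upstream kubelet.go uses nodeStatusUpdateRetry = 5), which is why "Error updating node status, will retry" recurs below with an identical payload; intermediate identical retries have been elided here. A schematic sketch of that loop under those assumptions:

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the small fixed retry budget the kubelet
// uses when patching node status (5 in upstream kubelet.go).
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the real API call; here it always fails
// the way the log shows: the admission webhook's TLS certificate has expired.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired or is not yet valid`)
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return
	}
	fmt.Println("unable to update node status: retry budget exhausted")
}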
2025-08-24T17:21:41Z" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.817953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.818001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.818012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.818030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.818043 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.839004 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:32Z is after 
2025-08-24T17:21:41Z" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.844503 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.844555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.844573 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.844597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.844615 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.869758 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d131c98e-35d3-4a76-8a3a-23528d1e3523\\\",\\\"systemUUID\\\":\\\"c9a2725d-83da-40b9-a1a2-b2190ab58130\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:32Z is after 
2025-08-24T17:21:41Z" Nov 23 03:56:32 crc kubenswrapper[4751]: E1123 03:56:32.870110 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.871834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.871873 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.871887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.871908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.871924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.975190 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.975261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.975280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.975306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:32 crc kubenswrapper[4751]: I1123 03:56:32.975323 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:32Z","lastTransitionTime":"2025-11-23T03:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
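The two status-patch attempts above, and the final "update node status exceeds retry count", all fail at the same point: the node.network-node-identity.openshift.io webhook on https://127.0.0.1:9743 serves a certificate whose notAfter (2025-08-24T17:21:41Z) is months before the node's current time (2025-11-23T03:56:32Z). A minimal sketch of confirming that from the node itself, assuming only the address and port quoted in the error (Go stdlib; verification is skipped deliberately so the expired leaf can still be read):

```go
// certcheck.go: print the validity window of the certificate served on
// the webhook endpoint named in the kubelet errors above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the certificate, don't trust it
	})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if time.Now().After(leaf.NotAfter) {
		fmt.Println("expired: matches the x509 error in the log")
	}
}
```

If the printed notAfter matches the 2025-08-24T17:21:41Z in the error, the failure lies in the webhook's serving certificate (or a node clock far ahead of it), not in the patch payload itself.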
Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.078040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.078117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.078140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.078170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.078191 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.181760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.181834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.181859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.181890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.181915 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.284731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.284782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.284794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.284813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.284824 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.388426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.388510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.388534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.388565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.388587 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.492094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.492163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.492180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.492214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.492232 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.595244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.595311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.595328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.595386 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.595405 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.698762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.698849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.698882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.698911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.698932 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.802004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.802114 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.802139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.802174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.802192 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.906408 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.906474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.906492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.906519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:33 crc kubenswrapper[4751]: I1123 03:56:33.906537 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:33Z","lastTransitionTime":"2025-11-23T03:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.009681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.009728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.009738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.009756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.009771 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.113126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.113180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.113193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.113213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.113226 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.217099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.217167 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.217185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.217209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.217226 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.320272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.320318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.320333 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.320392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.320410 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.423878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.423932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.423948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.423969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.423986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.527618 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.527675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.527697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.527723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.527742 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.629849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.629877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.629885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.629898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.629906 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.643580 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.643631 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.643657 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:34 crc kubenswrapper[4751]: E1123 03:56:34.643727 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.643594 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:34 crc kubenswrapper[4751]: E1123 03:56:34.643968 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:34 crc kubenswrapper[4751]: E1123 03:56:34.644328 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
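Every heartbeat in this stretch repeats the same Ready=False condition, and the sandbox errors that follow are its direct consequence: with no CNI configuration under /etc/kubernetes/cni/net.d/, the runtime cannot wire up a pod network namespace, so pods such as network-check-target-xd92c and network-metrics-daemon-c5nsl never get a sandbox. A rough sketch of the readiness test as it appears from these messages, assuming only the directory path quoted in the log (the real kubelet/CRI-O check is more involved):

```go
// cnicheck.go: report whether the CNI config directory from the kubelet
// message contains any network configuration files.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path quoted in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file; NetworkReady stays false")
	}
}
```

An empty listing here is consistent with the NetworkPluginNotReady reason repeated in every condition above.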
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:34 crc kubenswrapper[4751]: E1123 03:56:34.644511 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.657896 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4dq7q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee318377-acb2-4f75-9414-02313f3824e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:16Z\\\",\\\"message\\\":\\\"2025-11-23T03:55:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a\\\\n2025-11-23T03:55:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_db107c89-5682-47af-9009-5d4a2819314a to /host/opt/cni/bin/\\\\n2025-11-23T03:55:31Z [verbose] multus-daemon started\\\\n2025-11-23T03:55:31Z [verbose] Readiness Indicator file check\\\\n2025-11-23T03:56:16Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:56:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9cfjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4dq7q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.675708 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d68fa63-1624-4518-83ec-41a9fab460f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e665830fee248723dcf69941106646a388818ac3cda143183ad967ecc417b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e258761ae948be21a17fff1820e973a0f06162574f11d9095896573ecf3c4c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea1f4d16ebc8e73abd33c129dbf38712b6b25dbc387ba86e0150f0d9e4d329c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf16503d141d3b5046fb0d7694314f64e5f90b4d67aa2c20e7c708c14da063f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.687598 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vwbwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c4656b0-22d1-4a81-9d5c-d48b0521e0be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0278405c751d45c53331995c0c02f4c1486e1d62c9ef2ee12c883e6aa5c0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpkrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vwbwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.707314 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6962cbb1f3d27bc7c8af6d4b11bc355690c6036c2a15f3d07bcea940c0c9a0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b3e0e61e2baaf1814d41c98ffc0fb901b6f298166d5b156868bb6fedf4781b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.720845 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qft9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9d40550-4dd0-4a06-8fb7-0e8ad74822c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af62ec3c317eb78f0466e7da2dfbfdad69191ba9ae925f3be19ef93cde3c6544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnncv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qft9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.734689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.734753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.734777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.734807 4751 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.734827 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.747592 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b70755e-47c0-464f-bcd9-a509700373ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbb914fd3410e6543032f242c45707bcc8677b747025f6f48b293bb6daac4005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0c9379158e2cc2b4f8437ca38b7278953148db6294ddb6a0ae9cda4adebcd86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"
cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ce0fa2262f84060bfa9352c17e973ca275bdcd1bd8f6b26c341ce7e1d8f398\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1876cc0a19a35705bc8022ea635f629b6f0c70461629afc84e17a7e1a48d4822\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c38
8e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e01ec01ddb8b490088a63976bafd93c986ba48e40c5f0615a62332a45d0b430c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://269c87c1baabe89c8467533253d13b930b9f042cb1b3b69c8d713c2ed09e9ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d00e31e86ca550ce335676ed8454dbceca07d9ea7c7575caaf92e47f348677a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hzbnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" 
for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qxhsd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.764759 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81fe3605-5395-4a60-ba10-3a9bad078169\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8992g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c5nsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.781578 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12d32af5a6e7b412c1f0027d87fbfdd832c3a2f8630feb8140cfe596049f5901\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.799485 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.820594 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40ca61d3-e812-4e89-936c-6642b4e02c10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4126d07c588681cd1f070683107b5f226f13f64ce0b2829ba42984a621045772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e144e7da74e771931668be64bba9fe273dad498fed392bda7365d770f773f2a4\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7c78e817e4024100ac481ba0e1f64c449b3f16c180e5989292c7207f888ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7358e31c0575196ce8b96dbccfb0c63860f384be2e58237da46c3be4ee267f60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59c7757fb5b34ec4a0cdee0f570d35e3bfd6a9dbd29dab7e2c4a364036df45d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 03:55:24.259042 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 03:55:24.259274 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 03:55:24.260445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1616112874/tls.crt::/tmp/serving-cert-1616112874/tls.key\\\\\\\"\\\\nI1123 03:55:25.109789 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 03:55:25.119902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 03:55:25.119924 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 03:55:25.119941 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 03:55:25.119947 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 03:55:25.126918 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 03:55:25.126969 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 03:55:25.126976 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 
03:55:25.126982 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 03:55:25.126987 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 03:55:25.126992 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 03:55:25.126996 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 03:55:25.127252 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 03:55:25.128599 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d6268fc65fc27f66d4c957b36252e32c268c9e307e1d08b19f726361d8403f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8ec80bda9731e3ef0ab0e8afec549e6f52eb8c9cb78f54526f77d27e4daf400\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.837742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.837792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.837807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.837830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.837844 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.837984 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.857146 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ee8f2e503a03a6d2535de8f4f0b20406b81da9235f0608da130922f10f5dcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.874152 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.888716 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06e1c062-27d7-4432-9f0e-db4e98f65b0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a288177e9b4996f8509cdaffb419d0b8bd0d390825be429517cb87c39c752a8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pffx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pfb45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.915039 4751 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a97283a1-e673-4d60-889d-f0d483d72c37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T03:56:23Z\\\",\\\"message\\\":\\\"23 03:56:23.818988 6765 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.819015 6765 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1123 03:56:23.819220 6765 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.819563 6765 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1123 03:56:23.820161 6765 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 03:56:23.820180 6765 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 03:56:23.820227 6765 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 03:56:23.820235 6765 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 03:56:23.820255 6765 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 03:56:23.820268 6765 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 03:56:23.820281 6765 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 03:56:23.820303 6765 factory.go:656] Stopping watch factory\\\\nI1123 03:56:23.820317 6765 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T03:56:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rshhs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nfjcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.929556 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ef538a-f241-4f80-9f24-e7160a3a2379\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cedd617831a97c750473c419d7a8a2352803943d13a32362ff312453af9f855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfd1bead662a4b5344a1651e7be7084376809e1d29937e29d1b83e84a5e69fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rcq7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7n2gh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.940373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.940408 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.940423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.940445 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.940460 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:34Z","lastTransitionTime":"2025-11-23T03:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.943402 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f78141ea-d1e3-4e84-a4ac-2e231bb69189\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faa0661f900a4581d86cf5064a70c66d9e611dfcaef62da31d1e2b9c2acdb3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e11bf403176d4b93a7e7c3251ce9fe00ed92be6b3f857b1fe39f7a9b6cd6605c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://090f896579fa0f0a686aaec7620b7231c28fd2ad0f91f2bd0a68ab6e8e9c3e62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80108f8c4758986b6e2b398bf710cb5e8178f32a8ac2f8ea62661dcf03491512\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:34 crc kubenswrapper[4751]: I1123 03:56:34.967898 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7a8f46f1-56c1-444b-aa00-a8f57b8db001\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T03:55:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://226bcc05a0c7f2d525eb91b91773fb07fa2827202ca011092e202be6dd77acec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276
019e04bb1d41d35340dfb18794124517f686d90b6359b3765ddbe5aa1cc2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30e1ca7b9566e220dfc0df21d3646ddecb9d7381998d6acf88f6a78c2093f08b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2117b03669dee727a363c0b8f2ff82344d3e3cf52756d4cd55912643b1e50bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b8fee72bdb1d3bd9d1809392c7f3ce1880657619ab7b5c292c2c4ccd304ceda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T03:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c14d2c29a2503e6c5c293d4b7fdf012ed272278ab6230792f77527456cce7887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50f34d49f3dbee482a23a373cc018ad301b3da2b068532057d32fba857f3f9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf51054c254a7a02dc354dcedb346dba72163dc98aa553e961027189949f64b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T03:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T03:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T03:55:04Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T03:56:34Z is after 2025-08-24T17:21:41Z" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.043565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.043625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 
03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.043644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.043667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.043685 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.147193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.147254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.147278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.147325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.147392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.250863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.250946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.250970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.251002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.251026 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.354286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.354402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.354428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.354458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.354484 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.456761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.456836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.456860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.456894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.456918 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.559431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.559491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.559513 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.559542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.559559 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.662973 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.663072 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.663100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.663129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.663152 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.767499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.767587 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.767605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.767629 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.767646 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.871014 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.871077 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.871095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.871123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.871141 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.974333 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.974703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.974851 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.975118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:35 crc kubenswrapper[4751]: I1123 03:56:35.975268 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:35Z","lastTransitionTime":"2025-11-23T03:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.079279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.079388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.079415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.079439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.079457 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.184794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.184871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.184895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.184924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.184946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.287584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.287644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.287661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.287684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.287703 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.390808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.390978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.391042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.391067 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.391085 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.494882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.494960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.494980 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.495008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.495025 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.598428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.598482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.598502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.598529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.598546 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.643524 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.643574 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.643685 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:36 crc kubenswrapper[4751]: E1123 03:56:36.643855 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.643906 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:36 crc kubenswrapper[4751]: E1123 03:56:36.644078 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:36 crc kubenswrapper[4751]: E1123 03:56:36.644492 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:36 crc kubenswrapper[4751]: E1123 03:56:36.644591 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.659793 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.702057 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.702133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.702162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.702191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.702213 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.805102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.805161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.805179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.805202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.805221 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.907242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.907298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.907315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.907337 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:36 crc kubenswrapper[4751]: I1123 03:56:36.907385 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:36Z","lastTransitionTime":"2025-11-23T03:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.009688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.009725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.009735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.009750 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.009761 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.113246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.113787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.113976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.114116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.114263 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.217805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.217872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.217896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.217919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.217938 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.320543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.320591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.320614 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.320642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.320662 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.426113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.426181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.426199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.426225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.426244 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.529576 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.529636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.529654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.529674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.529689 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.633227 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.633295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.633319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.633381 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.633408 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.735916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.735967 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.736019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.736051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.736068 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.839697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.839762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.839784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.839809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.839826 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.942809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.942864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.942882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.942907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:37 crc kubenswrapper[4751]: I1123 03:56:37.942924 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:37Z","lastTransitionTime":"2025-11-23T03:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.045195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.045263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.045311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.045375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.045407 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.148110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.148171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.148195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.148227 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.148249 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.251248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.251313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.251331 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.251392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.251418 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.354443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.354516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.354539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.354569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.354592 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.457987 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.458048 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.458071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.458100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.458123 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.561631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.561698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.561716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.561740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.561759 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.643452 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.643583 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.643607 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:38 crc kubenswrapper[4751]: E1123 03:56:38.643686 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.643748 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:38 crc kubenswrapper[4751]: E1123 03:56:38.643942 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:38 crc kubenswrapper[4751]: E1123 03:56:38.644714 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:38 crc kubenswrapper[4751]: E1123 03:56:38.644863 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.645185 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 03:56:38 crc kubenswrapper[4751]: E1123 03:56:38.645544 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.664315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.664403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.664423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.664446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.664465 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.767828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.767895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.767914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.767937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.767957 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.871271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.871335 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.871390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.871415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.871432 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.975032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.975120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.975137 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.975162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:38 crc kubenswrapper[4751]: I1123 03:56:38.975179 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:38Z","lastTransitionTime":"2025-11-23T03:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.078397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.078472 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.078494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.078517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.078535 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.182605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.182685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.182709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.182744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.182765 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.285788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.285861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.285886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.285916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.285939 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.389413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.389475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.389493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.389517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.389534 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.492838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.492890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.492909 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.492935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.492950 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.595734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.595783 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.595802 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.595825 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.595841 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.699464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.699530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.699548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.699572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.699589 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.802771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.802824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.802840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.802864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.802882 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.906252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.906334 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.906388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.906420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:39 crc kubenswrapper[4751]: I1123 03:56:39.906487 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:39Z","lastTransitionTime":"2025-11-23T03:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.008816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.008876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.008896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.008922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.008938 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.111857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.111915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.111932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.111958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.111975 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.220790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.220869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.220948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.221019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.221038 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.323331 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.323398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.323409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.323424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.323437 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.426570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.426689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.426716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.426748 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.426771 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.530162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.530222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.530243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.530273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.530292 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.633233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.633287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.633305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.633327 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.633370 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.644728 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.644917 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.644947 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.644999 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:40 crc kubenswrapper[4751]: E1123 03:56:40.645145 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:40 crc kubenswrapper[4751]: E1123 03:56:40.645313 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:40 crc kubenswrapper[4751]: E1123 03:56:40.645432 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:40 crc kubenswrapper[4751]: E1123 03:56:40.645584 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.735984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.736047 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.736065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.736089 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.736112 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.838855 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.838921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.838937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.838962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.838979 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.942084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.942158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.942180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.942206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:40 crc kubenswrapper[4751]: I1123 03:56:40.942227 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:40Z","lastTransitionTime":"2025-11-23T03:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.045522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.045585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.045605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.045713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.045732 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.148703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.148782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.148800 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.148823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.148840 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.252143 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.252216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.252235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.252261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.252281 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.355229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.355291 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.355314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.355341 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.355388 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.458168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.458222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.458241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.458264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.458282 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.560952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.561000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.561016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.561063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.561081 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.664094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.664153 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.664172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.664196 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.664213 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.767920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.767982 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.768002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.768026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.768044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.871172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.871259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.871286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.871315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.871332 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.974698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.974780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.974806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.974840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:41 crc kubenswrapper[4751]: I1123 03:56:41.974865 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:41Z","lastTransitionTime":"2025-11-23T03:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.078033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.078090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.078106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.078127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.078141 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.181460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.181518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.181536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.181558 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.181573 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.284600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.284664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.284680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.284705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.284722 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.387942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.388002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.388020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.388044 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.388062 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.491499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.491578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.491602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.491633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.491655 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.594793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.594863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.594883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.594908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.594930 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.643429 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.643502 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.643527 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:42 crc kubenswrapper[4751]: E1123 03:56:42.643621 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
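The block above is the kubelet's node-status loop: each status sync (roughly every 100 ms here) re-records the resource events and re-sets the Ready condition to False because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A minimal sketch for pulling those conditions out of a saved copy of this journal; the kubelet.log filename is an assumption, not part of the log:

```python
import json
import re

# Sketch: extract the Ready condition that setters.go:603 prints as
# `condition={...}` and report why the node is NotReady. The regex matches
# lines shaped like the ones above; kubelet.log is a hypothetical path.
COND = re.compile(r'"Node became not ready" node="([^"]+)" condition=(\{.*\})')

for line in open("kubelet.log"):
    m = COND.search(line)
    if m:
        node, cond = m.group(1), json.loads(m.group(2))
        print(node, cond["lastHeartbeatTime"], cond["reason"],
              "-", cond["message"].split(":", 1)[0])
```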
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.643656 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:42 crc kubenswrapper[4751]: E1123 03:56:42.643796 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:42 crc kubenswrapper[4751]: E1123 03:56:42.643998 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:42 crc kubenswrapper[4751]: E1123 03:56:42.644071 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.697985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.698040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.698063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.698091 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.698115 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.801519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.801579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.801599 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.801624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.801642 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.905062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.905125 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.905145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.905168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.905185 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.957161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.957218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.957239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.957265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 03:56:42 crc kubenswrapper[4751]: I1123 03:56:42.957285 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T03:56:42Z","lastTransitionTime":"2025-11-23T03:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.044451 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk"] Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.045170 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.048881 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.049419 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.049945 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.051074 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.112786 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qft9h" podStartSLOduration=76.112768602 podStartE2EDuration="1m16.112768602s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.0896783 +0000 UTC m=+99.283349719" watchObservedRunningTime="2025-11-23 03:56:43.112768602 +0000 UTC m=+99.306439961" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.129547 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qxhsd" podStartSLOduration=75.129514726 podStartE2EDuration="1m15.129514726s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.113220534 +0000 UTC m=+99.306891913" watchObservedRunningTime="2025-11-23 03:56:43.129514726 +0000 UTC m=+99.323186125" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.138617 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2f39e939-ea86-430c-8955-ac7f4b047462-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.138686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2f39e939-ea86-430c-8955-ac7f4b047462-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.138744 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f39e939-ea86-430c-8955-ac7f4b047462-kube-api-access\") pod 
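Each pod_startup_latency_tracker line above is internally consistent: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the pull timestamps are the Go zero time because no image pull was recorded. A quick check of the node-ca-qft9h entry, with values copied from the log (datetime only carries microseconds, so the nanoseconds are rounded):

```python
from datetime import datetime, timezone

# Values from the node-ca-qft9h line above.
created = datetime(2025, 11, 23, 3, 55, 27, tzinfo=timezone.utc)
watched = datetime(2025, 11, 23, 3, 56, 43, 112769, tzinfo=timezone.utc)  # 03:56:43.112768602

# Reproduces podStartSLOduration=76.112768602 (i.e. the logged 1m16s),
# up to microsecond precision.
print((watched - created).total_seconds())  # 76.112769
```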
\"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.138794 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f39e939-ea86-430c-8955-ac7f4b047462-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.138819 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f39e939-ea86-430c-8955-ac7f4b047462-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.190264 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.190244925 podStartE2EDuration="7.190244925s" podCreationTimestamp="2025-11-23 03:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.189471784 +0000 UTC m=+99.383143183" watchObservedRunningTime="2025-11-23 03:56:43.190244925 +0000 UTC m=+99.383916284" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.212633 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.212618298 podStartE2EDuration="1m18.212618298s" podCreationTimestamp="2025-11-23 03:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.212588497 +0000 UTC m=+99.406259906" watchObservedRunningTime="2025-11-23 03:56:43.212618298 +0000 UTC m=+99.406289647" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.240263 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2f39e939-ea86-430c-8955-ac7f4b047462-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.240373 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2f39e939-ea86-430c-8955-ac7f4b047462-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.240443 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2f39e939-ea86-430c-8955-ac7f4b047462-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: 
I1123 03:56:43.240467 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f39e939-ea86-430c-8955-ac7f4b047462-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.240505 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f39e939-ea86-430c-8955-ac7f4b047462-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.240522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2f39e939-ea86-430c-8955-ac7f4b047462-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.240537 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f39e939-ea86-430c-8955-ac7f4b047462-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.241362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2f39e939-ea86-430c-8955-ac7f4b047462-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.254121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f39e939-ea86-430c-8955-ac7f4b047462-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.267824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f39e939-ea86-430c-8955-ac7f4b047462-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xxqmk\" (UID: \"2f39e939-ea86-430c-8955-ac7f4b047462\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.330549 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podStartSLOduration=76.330533852 podStartE2EDuration="1m16.330533852s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.303487136 +0000 UTC m=+99.497158535" watchObservedRunningTime="2025-11-23 03:56:43.330533852 +0000 UTC m=+99.524205211" Nov 23 03:56:43 crc 
kubenswrapper[4751]: I1123 03:56:43.346782 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7n2gh" podStartSLOduration=75.346768512 podStartE2EDuration="1m15.346768512s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.346412013 +0000 UTC m=+99.540083372" watchObservedRunningTime="2025-11-23 03:56:43.346768512 +0000 UTC m=+99.540439871" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.360550 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.363334 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.363319221 podStartE2EDuration="46.363319221s" podCreationTimestamp="2025-11-23 03:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.363235439 +0000 UTC m=+99.556906808" watchObservedRunningTime="2025-11-23 03:56:43.363319221 +0000 UTC m=+99.556990580" Nov 23 03:56:43 crc kubenswrapper[4751]: W1123 03:56:43.380753 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f39e939_ea86_430c_8955_ac7f4b047462.slice/crio-683f7e7456cfa2cd4adb99f278d4fd3dddc5de7cc1b2d998470d308f59b3f077 WatchSource:0}: Error finding container 683f7e7456cfa2cd4adb99f278d4fd3dddc5de7cc1b2d998470d308f59b3f077: Status 404 returned error can't find the container with id 683f7e7456cfa2cd4adb99f278d4fd3dddc5de7cc1b2d998470d308f59b3f077 Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.434474 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=77.434452146 podStartE2EDuration="1m17.434452146s" podCreationTimestamp="2025-11-23 03:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.433731197 +0000 UTC m=+99.627402556" watchObservedRunningTime="2025-11-23 03:56:43.434452146 +0000 UTC m=+99.628123515" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.455004 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4dq7q" podStartSLOduration=75.454989 podStartE2EDuration="1m15.454989s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.454510267 +0000 UTC m=+99.648181646" watchObservedRunningTime="2025-11-23 03:56:43.454989 +0000 UTC m=+99.648660359" Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.475736 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.475722089 podStartE2EDuration="1m19.475722089s" podCreationTimestamp="2025-11-23 03:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.475594886 +0000 UTC m=+99.669266255" 
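The `m=+99.xxx` suffixes on the Go timestamps above are monotonic-clock readings measured from process start, so wall clock minus offset dates this kubelet process (pid 4751). A sketch using the kube-controller-manager-crc entry:

```python
from datetime import datetime, timedelta, timezone

# watchObservedRunningTime="2025-11-23 03:56:43.475722089 +0000 UTC m=+99.669393448"
# (nanoseconds rounded to the microseconds datetime supports).
wall = datetime(2025, 11, 23, 3, 56, 43, 475722, tzinfo=timezone.utc)
mono = timedelta(seconds=99.669393448)

# Estimated start of this kubelet process: ~03:55:03.8 UTC.
print(wall - mono)  # 2025-11-23 03:55:03.806329+00:00
```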
Nov 23 03:56:43 crc kubenswrapper[4751]: I1123 03:56:43.485925 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vwbwq" podStartSLOduration=76.485910709 podStartE2EDuration="1m16.485910709s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:43.484878172 +0000 UTC m=+99.678549531" watchObservedRunningTime="2025-11-23 03:56:43.485910709 +0000 UTC m=+99.679582068"
Nov 23 03:56:44 crc kubenswrapper[4751]: I1123 03:56:44.257090 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" event={"ID":"2f39e939-ea86-430c-8955-ac7f4b047462","Type":"ContainerStarted","Data":"a68c40248863d75a3977e21136abb6ddc55ebc50287950673baa99355039cbc5"}
Nov 23 03:56:44 crc kubenswrapper[4751]: I1123 03:56:44.258117 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" event={"ID":"2f39e939-ea86-430c-8955-ac7f4b047462","Type":"ContainerStarted","Data":"683f7e7456cfa2cd4adb99f278d4fd3dddc5de7cc1b2d998470d308f59b3f077"}
Nov 23 03:56:44 crc kubenswrapper[4751]: I1123 03:56:44.277004 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xxqmk" podStartSLOduration=76.276981962 podStartE2EDuration="1m16.276981962s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:56:44.276263433 +0000 UTC m=+100.469934822" watchObservedRunningTime="2025-11-23 03:56:44.276981962 +0000 UTC m=+100.470653361"
Nov 23 03:56:44 crc kubenswrapper[4751]: I1123 03:56:44.643419 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:44 crc kubenswrapper[4751]: I1123 03:56:44.645201 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:44 crc kubenswrapper[4751]: I1123 03:56:44.645290 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:44 crc kubenswrapper[4751]: I1123 03:56:44.645312 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:44 crc kubenswrapper[4751]: E1123 03:56:44.645696 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:56:44 crc kubenswrapper[4751]: E1123 03:56:44.645836 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:44 crc kubenswrapper[4751]: E1123 03:56:44.645950 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
Nov 23 03:56:44 crc kubenswrapper[4751]: E1123 03:56:44.645979 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:56:46 crc kubenswrapper[4751]: I1123 03:56:46.644581 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:46 crc kubenswrapper[4751]: I1123 03:56:46.644674 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:46 crc kubenswrapper[4751]: I1123 03:56:46.644693 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:46 crc kubenswrapper[4751]: I1123 03:56:46.644844 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:46 crc kubenswrapper[4751]: E1123 03:56:46.644847 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:56:46 crc kubenswrapper[4751]: E1123 03:56:46.645005 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:46 crc kubenswrapper[4751]: E1123 03:56:46.645149 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
Nov 23 03:56:46 crc kubenswrapper[4751]: E1123 03:56:46.645301 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:56:46 crc kubenswrapper[4751]: I1123 03:56:46.877656 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:46 crc kubenswrapper[4751]: E1123 03:56:46.877853 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 03:56:46 crc kubenswrapper[4751]: E1123 03:56:46.877920 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs podName:81fe3605-5395-4a60-ba10-3a9bad078169 nodeName:}" failed. No retries permitted until 2025-11-23 03:57:50.877904241 +0000 UTC m=+167.071575600 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs") pod "network-metrics-daemon-c5nsl" (UID: "81fe3605-5395-4a60-ba10-3a9bad078169") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 03:56:48 crc kubenswrapper[4751]: I1123 03:56:48.643312 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:48 crc kubenswrapper[4751]: I1123 03:56:48.643376 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:48 crc kubenswrapper[4751]: E1123 03:56:48.643633 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
Nov 23 03:56:48 crc kubenswrapper[4751]: E1123 03:56:48.643763 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:48 crc kubenswrapper[4751]: I1123 03:56:48.644132 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:48 crc kubenswrapper[4751]: E1123 03:56:48.644279 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:56:48 crc kubenswrapper[4751]: I1123 03:56:48.643330 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:48 crc kubenswrapper[4751]: E1123 03:56:48.644785 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:56:50 crc kubenswrapper[4751]: I1123 03:56:50.643290 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:50 crc kubenswrapper[4751]: I1123 03:56:50.643324 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:50 crc kubenswrapper[4751]: I1123 03:56:50.643399 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:50 crc kubenswrapper[4751]: E1123 03:56:50.643442 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:56:50 crc kubenswrapper[4751]: I1123 03:56:50.643494 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:50 crc kubenswrapper[4751]: E1123 03:56:50.643627 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:50 crc kubenswrapper[4751]: E1123 03:56:50.644230 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
Nov 23 03:56:50 crc kubenswrapper[4751]: I1123 03:56:50.644413 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"
Nov 23 03:56:50 crc kubenswrapper[4751]: E1123 03:56:50.644567 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37"
Nov 23 03:56:50 crc kubenswrapper[4751]: E1123 03:56:50.644687 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:56:52 crc kubenswrapper[4751]: I1123 03:56:52.643840 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:52 crc kubenswrapper[4751]: I1123 03:56:52.643841 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:52 crc kubenswrapper[4751]: I1123 03:56:52.643987 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:52 crc kubenswrapper[4751]: I1123 03:56:52.644506 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:52 crc kubenswrapper[4751]: E1123 03:56:52.644651 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:52 crc kubenswrapper[4751]: E1123 03:56:52.644829 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 03:56:52 crc kubenswrapper[4751]: E1123 03:56:52.644978 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
Nov 23 03:56:52 crc kubenswrapper[4751]: E1123 03:56:52.645075 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 03:56:54 crc kubenswrapper[4751]: I1123 03:56:54.643186 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 03:56:54 crc kubenswrapper[4751]: E1123 03:56:54.645476 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 03:56:54 crc kubenswrapper[4751]: I1123 03:56:54.645840 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl"
Nov 23 03:56:54 crc kubenswrapper[4751]: I1123 03:56:54.645993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 03:56:54 crc kubenswrapper[4751]: E1123 03:56:54.646125 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169"
Nov 23 03:56:54 crc kubenswrapper[4751]: I1123 03:56:54.646023 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 03:56:54 crc kubenswrapper[4751]: E1123 03:56:54.646325 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:54 crc kubenswrapper[4751]: E1123 03:56:54.646577 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:56 crc kubenswrapper[4751]: I1123 03:56:56.643945 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:56 crc kubenswrapper[4751]: I1123 03:56:56.644014 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:56 crc kubenswrapper[4751]: I1123 03:56:56.644073 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:56 crc kubenswrapper[4751]: I1123 03:56:56.644106 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:56 crc kubenswrapper[4751]: E1123 03:56:56.644799 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:56 crc kubenswrapper[4751]: E1123 03:56:56.644885 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:56 crc kubenswrapper[4751]: E1123 03:56:56.645000 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:56:56 crc kubenswrapper[4751]: E1123 03:56:56.645185 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:58 crc kubenswrapper[4751]: I1123 03:56:58.643478 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:56:58 crc kubenswrapper[4751]: E1123 03:56:58.643618 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:56:58 crc kubenswrapper[4751]: I1123 03:56:58.643696 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:56:58 crc kubenswrapper[4751]: I1123 03:56:58.643743 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:56:58 crc kubenswrapper[4751]: I1123 03:56:58.643788 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:56:58 crc kubenswrapper[4751]: E1123 03:56:58.643907 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:56:58 crc kubenswrapper[4751]: E1123 03:56:58.644124 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:56:58 crc kubenswrapper[4751]: E1123 03:56:58.644249 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:00 crc kubenswrapper[4751]: I1123 03:57:00.643550 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:00 crc kubenswrapper[4751]: I1123 03:57:00.643659 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:00 crc kubenswrapper[4751]: I1123 03:57:00.643558 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:00 crc kubenswrapper[4751]: E1123 03:57:00.643726 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:00 crc kubenswrapper[4751]: I1123 03:57:00.643561 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:00 crc kubenswrapper[4751]: E1123 03:57:00.643863 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:00 crc kubenswrapper[4751]: E1123 03:57:00.643944 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:00 crc kubenswrapper[4751]: E1123 03:57:00.644024 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:02 crc kubenswrapper[4751]: I1123 03:57:02.643233 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:02 crc kubenswrapper[4751]: I1123 03:57:02.643336 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:02 crc kubenswrapper[4751]: I1123 03:57:02.643504 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:02 crc kubenswrapper[4751]: E1123 03:57:02.643512 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:02 crc kubenswrapper[4751]: I1123 03:57:02.643789 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:02 crc kubenswrapper[4751]: E1123 03:57:02.643833 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
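By this point the same four pods have been failing to sync for over a minute, all on the identical CNI error. A sketch that tallies sync failures per pod from a saved copy of this journal (kubelet.log is again a hypothetical path; the pattern also counts the CrashLoopBackOff sync errors):

```python
import re
from collections import Counter

# Count "Error syncing pod, skipping" lines per pod name.
PAT = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

counts = Counter(m.group(1) for m in PAT.finditer(open("kubelet.log").read()))
for pod, n in counts.most_common():
    print(f"{n:4d}  {pod}")
```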
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:02 crc kubenswrapper[4751]: E1123 03:57:02.643722 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:02 crc kubenswrapper[4751]: E1123 03:57:02.643919 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:03 crc kubenswrapper[4751]: I1123 03:57:03.328075 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/1.log" Nov 23 03:57:03 crc kubenswrapper[4751]: I1123 03:57:03.329001 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/0.log" Nov 23 03:57:03 crc kubenswrapper[4751]: I1123 03:57:03.329076 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee318377-acb2-4f75-9414-02313f3824e0" containerID="226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d" exitCode=1 Nov 23 03:57:03 crc kubenswrapper[4751]: I1123 03:57:03.329116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dq7q" event={"ID":"ee318377-acb2-4f75-9414-02313f3824e0","Type":"ContainerDied","Data":"226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d"} Nov 23 03:57:03 crc kubenswrapper[4751]: I1123 03:57:03.329174 4751 scope.go:117] "RemoveContainer" containerID="adaa727803a9d26d28b40fc5705de7029c8add541eadd0e9bf26c6bd2f1b782a" Nov 23 03:57:03 crc kubenswrapper[4751]: I1123 03:57:03.329799 4751 scope.go:117] "RemoveContainer" containerID="226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d" Nov 23 03:57:03 crc kubenswrapper[4751]: E1123 03:57:03.330101 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4dq7q_openshift-multus(ee318377-acb2-4f75-9414-02313f3824e0)\"" pod="openshift-multus/multus-4dq7q" podUID="ee318377-acb2-4f75-9414-02313f3824e0" Nov 23 03:57:03 crc kubenswrapper[4751]: I1123 03:57:03.644879 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 03:57:03 crc kubenswrapper[4751]: E1123 03:57:03.645133 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nfjcv_openshift-ovn-kubernetes(a97283a1-e673-4d60-889d-f0d483d72c37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" Nov 23 03:57:04 crc kubenswrapper[4751]: I1123 03:57:04.339139 4751 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/1.log" Nov 23 03:57:04 crc kubenswrapper[4751]: I1123 03:57:04.643396 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:04 crc kubenswrapper[4751]: I1123 03:57:04.643475 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:04 crc kubenswrapper[4751]: I1123 03:57:04.643493 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:04 crc kubenswrapper[4751]: I1123 03:57:04.645499 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:04 crc kubenswrapper[4751]: E1123 03:57:04.645497 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:04 crc kubenswrapper[4751]: E1123 03:57:04.645645 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:04 crc kubenswrapper[4751]: E1123 03:57:04.645791 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:04 crc kubenswrapper[4751]: E1123 03:57:04.645873 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:04 crc kubenswrapper[4751]: E1123 03:57:04.663430 4751 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 23 03:57:04 crc kubenswrapper[4751]: E1123 03:57:04.741111 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 03:57:06 crc kubenswrapper[4751]: I1123 03:57:06.643464 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:06 crc kubenswrapper[4751]: I1123 03:57:06.643555 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:06 crc kubenswrapper[4751]: I1123 03:57:06.643508 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:06 crc kubenswrapper[4751]: E1123 03:57:06.643649 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:06 crc kubenswrapper[4751]: I1123 03:57:06.643775 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:06 crc kubenswrapper[4751]: E1123 03:57:06.643779 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:06 crc kubenswrapper[4751]: E1123 03:57:06.643958 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:06 crc kubenswrapper[4751]: E1123 03:57:06.644141 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:08 crc kubenswrapper[4751]: I1123 03:57:08.644598 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:08 crc kubenswrapper[4751]: I1123 03:57:08.644644 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:08 crc kubenswrapper[4751]: I1123 03:57:08.644739 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:08 crc kubenswrapper[4751]: E1123 03:57:08.644852 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:08 crc kubenswrapper[4751]: E1123 03:57:08.645089 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:08 crc kubenswrapper[4751]: E1123 03:57:08.645323 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:08 crc kubenswrapper[4751]: I1123 03:57:08.648895 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:08 crc kubenswrapper[4751]: E1123 03:57:08.649855 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:09 crc kubenswrapper[4751]: E1123 03:57:09.743133 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 03:57:10 crc kubenswrapper[4751]: I1123 03:57:10.643031 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:10 crc kubenswrapper[4751]: I1123 03:57:10.643101 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:10 crc kubenswrapper[4751]: E1123 03:57:10.643162 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:10 crc kubenswrapper[4751]: I1123 03:57:10.643174 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:10 crc kubenswrapper[4751]: E1123 03:57:10.643304 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
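
[Annotation] The "back-off 10s restarting failed container=kube-multus" and "back-off 40s ... ovnkube-controller" entries above show the kubelet's CrashLoopBackOff schedule: the restart delay starts at 10s, doubles after each consecutive failed restart, is capped at 5m, and resets once the container has run cleanly for a while. A minimal sketch of that doubling schedule (illustrative only, not the kubelet's actual implementation, which tracks this per container via its own backoff keyed by pod UID and container name):

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns an illustrative restart delay after `failures`
// consecutive failed restarts: 10s, doubling per failure, capped at 5m.
func crashLoopDelay(failures int) time.Duration {
	delay := 10 * time.Second
	for i := 1; i < failures; i++ {
		delay *= 2
		if delay >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return delay
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> back-off %s\n", n, crashLoopDelay(n))
	}
	// failure 1 -> back-off 10s ... failure 6 -> back-off 5m0s
}
```

Under this schedule, the "back-off 40s" seen for ovnkube-controller corresponds to a third consecutive failed restart, while kube-multus at "back-off 10s" has just failed for the first time.
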
pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:10 crc kubenswrapper[4751]: I1123 03:57:10.643412 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:10 crc kubenswrapper[4751]: E1123 03:57:10.643559 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:10 crc kubenswrapper[4751]: E1123 03:57:10.643709 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:12 crc kubenswrapper[4751]: I1123 03:57:12.643886 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:12 crc kubenswrapper[4751]: I1123 03:57:12.643961 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:12 crc kubenswrapper[4751]: I1123 03:57:12.643968 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:12 crc kubenswrapper[4751]: I1123 03:57:12.644060 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:12 crc kubenswrapper[4751]: E1123 03:57:12.644066 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:12 crc kubenswrapper[4751]: E1123 03:57:12.644245 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:12 crc kubenswrapper[4751]: E1123 03:57:12.644308 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:12 crc kubenswrapper[4751]: E1123 03:57:12.644409 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:14 crc kubenswrapper[4751]: I1123 03:57:14.643761 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:14 crc kubenswrapper[4751]: I1123 03:57:14.643807 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:14 crc kubenswrapper[4751]: I1123 03:57:14.643821 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:14 crc kubenswrapper[4751]: I1123 03:57:14.643969 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:14 crc kubenswrapper[4751]: E1123 03:57:14.644114 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:14 crc kubenswrapper[4751]: E1123 03:57:14.644435 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:14 crc kubenswrapper[4751]: E1123 03:57:14.644438 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:14 crc kubenswrapper[4751]: E1123 03:57:14.644611 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:14 crc kubenswrapper[4751]: E1123 03:57:14.743973 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 03:57:16 crc kubenswrapper[4751]: I1123 03:57:16.643774 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:16 crc kubenswrapper[4751]: I1123 03:57:16.643827 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:16 crc kubenswrapper[4751]: I1123 03:57:16.644096 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:16 crc kubenswrapper[4751]: E1123 03:57:16.644082 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:16 crc kubenswrapper[4751]: I1123 03:57:16.644177 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:16 crc kubenswrapper[4751]: E1123 03:57:16.644428 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:16 crc kubenswrapper[4751]: E1123 03:57:16.644565 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:16 crc kubenswrapper[4751]: E1123 03:57:16.644749 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:16 crc kubenswrapper[4751]: I1123 03:57:16.646041 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 03:57:17 crc kubenswrapper[4751]: I1123 03:57:17.406489 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/3.log" Nov 23 03:57:17 crc kubenswrapper[4751]: I1123 03:57:17.410396 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerStarted","Data":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} Nov 23 03:57:17 crc kubenswrapper[4751]: I1123 03:57:17.410997 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 03:57:17 crc kubenswrapper[4751]: I1123 03:57:17.451769 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podStartSLOduration=109.45174958 podStartE2EDuration="1m49.45174958s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:17.450577978 +0000 UTC m=+133.644249347" watchObservedRunningTime="2025-11-23 03:57:17.45174958 +0000 UTC m=+133.645420959" Nov 23 03:57:17 crc kubenswrapper[4751]: I1123 03:57:17.618805 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c5nsl"] Nov 23 03:57:17 crc kubenswrapper[4751]: I1123 03:57:17.618956 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:17 crc kubenswrapper[4751]: E1123 03:57:17.619091 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:17 crc kubenswrapper[4751]: I1123 03:57:17.644432 4751 scope.go:117] "RemoveContainer" containerID="226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d" Nov 23 03:57:18 crc kubenswrapper[4751]: I1123 03:57:18.417525 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/1.log" Nov 23 03:57:18 crc kubenswrapper[4751]: I1123 03:57:18.417924 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dq7q" event={"ID":"ee318377-acb2-4f75-9414-02313f3824e0","Type":"ContainerStarted","Data":"4ed20621c5838b3e4184fbcb2fd997c3e04eff21ce058ecbe3eba314d96adeff"} Nov 23 03:57:18 crc kubenswrapper[4751]: I1123 03:57:18.643766 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:18 crc kubenswrapper[4751]: E1123 03:57:18.643938 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:18 crc kubenswrapper[4751]: I1123 03:57:18.644303 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:18 crc kubenswrapper[4751]: E1123 03:57:18.644436 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:18 crc kubenswrapper[4751]: I1123 03:57:18.644652 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:18 crc kubenswrapper[4751]: E1123 03:57:18.644799 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:19 crc kubenswrapper[4751]: I1123 03:57:19.644033 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:19 crc kubenswrapper[4751]: E1123 03:57:19.644188 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:19 crc kubenswrapper[4751]: E1123 03:57:19.745410 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 03:57:20 crc kubenswrapper[4751]: I1123 03:57:20.643108 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:20 crc kubenswrapper[4751]: I1123 03:57:20.643166 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:20 crc kubenswrapper[4751]: I1123 03:57:20.643167 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:20 crc kubenswrapper[4751]: E1123 03:57:20.643277 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:20 crc kubenswrapper[4751]: E1123 03:57:20.643506 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:20 crc kubenswrapper[4751]: E1123 03:57:20.643782 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:21 crc kubenswrapper[4751]: I1123 03:57:21.643104 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:21 crc kubenswrapper[4751]: E1123 03:57:21.643250 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:22 crc kubenswrapper[4751]: I1123 03:57:22.643433 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:22 crc kubenswrapper[4751]: I1123 03:57:22.643632 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:22 crc kubenswrapper[4751]: I1123 03:57:22.643646 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:22 crc kubenswrapper[4751]: E1123 03:57:22.643796 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:22 crc kubenswrapper[4751]: E1123 03:57:22.643991 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:22 crc kubenswrapper[4751]: E1123 03:57:22.644139 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:23 crc kubenswrapper[4751]: I1123 03:57:23.643905 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:23 crc kubenswrapper[4751]: E1123 03:57:23.644051 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c5nsl" podUID="81fe3605-5395-4a60-ba10-3a9bad078169" Nov 23 03:57:24 crc kubenswrapper[4751]: I1123 03:57:24.643441 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:24 crc kubenswrapper[4751]: I1123 03:57:24.643485 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:24 crc kubenswrapper[4751]: I1123 03:57:24.645709 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:24 crc kubenswrapper[4751]: E1123 03:57:24.645724 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 03:57:24 crc kubenswrapper[4751]: E1123 03:57:24.645766 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 03:57:24 crc kubenswrapper[4751]: E1123 03:57:24.646577 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 03:57:25 crc kubenswrapper[4751]: I1123 03:57:25.643595 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:25 crc kubenswrapper[4751]: I1123 03:57:25.646240 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 23 03:57:25 crc kubenswrapper[4751]: I1123 03:57:25.646320 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 23 03:57:26 crc kubenswrapper[4751]: I1123 03:57:26.643626 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:26 crc kubenswrapper[4751]: I1123 03:57:26.643632 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:26 crc kubenswrapper[4751]: I1123 03:57:26.643675 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:26 crc kubenswrapper[4751]: I1123 03:57:26.646210 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 23 03:57:26 crc kubenswrapper[4751]: I1123 03:57:26.646299 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 23 03:57:26 crc kubenswrapper[4751]: I1123 03:57:26.646502 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 23 03:57:26 crc kubenswrapper[4751]: I1123 03:57:26.647072 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.583034 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.583197 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:32 crc kubenswrapper[4751]: E1123 03:57:32.583255 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:59:34.583211828 +0000 UTC m=+270.776883227 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.583333 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.583460 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.583503 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.584863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.592251 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.592653 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.593944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.670295 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.684799 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 03:57:32 crc kubenswrapper[4751]: I1123 03:57:32.698034 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:33 crc kubenswrapper[4751]: W1123 03:57:33.036509 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-7976dd1e0487b8d87c518c2570f68fecb298b0b4881e9830f8962dc38257d1b1 WatchSource:0}: Error finding container 7976dd1e0487b8d87c518c2570f68fecb298b0b4881e9830f8962dc38257d1b1: Status 404 returned error can't find the container with id 7976dd1e0487b8d87c518c2570f68fecb298b0b4881e9830f8962dc38257d1b1 Nov 23 03:57:33 crc kubenswrapper[4751]: W1123 03:57:33.150958 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b21d13c5385ed6869eff3a241e4a451555dd134ccbc3abdc3df633e3d818cd1a WatchSource:0}: Error finding container b21d13c5385ed6869eff3a241e4a451555dd134ccbc3abdc3df633e3d818cd1a: Status 404 returned error can't find the container with id b21d13c5385ed6869eff3a241e4a451555dd134ccbc3abdc3df633e3d818cd1a Nov 23 03:57:33 crc kubenswrapper[4751]: I1123 03:57:33.478054 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8a8191b69870690009961962daafe7b0e402fe4b64335caa5d4e4fde43611fe9"} Nov 23 03:57:33 crc kubenswrapper[4751]: I1123 03:57:33.478127 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"345389a126e5383568d40a532c442ac4ab57815e85a63bd6c407849912ef3e25"} Nov 23 03:57:33 crc kubenswrapper[4751]: I1123 03:57:33.480698 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0197230cf51de5580cf52df8c988d0126e38db39cd09a92f3747f33ca95354d0"} Nov 23 03:57:33 crc kubenswrapper[4751]: I1123 03:57:33.480771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7976dd1e0487b8d87c518c2570f68fecb298b0b4881e9830f8962dc38257d1b1"} Nov 23 03:57:33 crc kubenswrapper[4751]: I1123 03:57:33.484731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"addb4b332d0ddb9b2e6bbe54c8744c3ef005f24e6e5f526c1707c188f26df454"} Nov 23 03:57:33 crc kubenswrapper[4751]: I1123 03:57:33.484778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b21d13c5385ed6869eff3a241e4a451555dd134ccbc3abdc3df633e3d818cd1a"} Nov 23 03:57:33 crc kubenswrapper[4751]: I1123 03:57:33.485078 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.047226 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.101125 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pvr4q"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.101748 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.104465 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.105069 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.105988 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-762nm"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.106334 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.106614 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.106728 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.106953 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.107282 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.107867 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.108437 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.108877 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vdnb9"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.109658 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.110229 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.110946 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.111124 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.111900 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.111947 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.112531 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.112831 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.113481 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x8bwj"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.113582 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.114105 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.126046 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.126094 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.128520 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.129795 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.130698 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.131273 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.132016 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.132395 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.132459 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.132684 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.132921 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.132976 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.133093 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.133317 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.135471 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.135612 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.140495 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.141787 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.142163 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bbrhw"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.142187 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.142779 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.143248 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.144236 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.144832 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.144846 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.146799 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.160735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.160939 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.161266 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.161485 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.161563 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.161623 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.161780 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.161889 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.161999 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162116 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162228 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162361 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162483 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162596 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162701 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162803 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162969 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163079 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163215 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163315 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.162909 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163532 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163692 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163718 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163823 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163841 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.163905 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.164100 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.164206 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.164266 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.164330 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.164448 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.165039 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.165147 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.166972 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.167394 
4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.167462 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.167937 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rtcqm"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.168324 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.168411 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.169173 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.170588 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.171340 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.173207 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-n7llf"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.173734 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-n7llf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.174009 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.175041 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zc84k"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.175933 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.176223 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.177493 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.178142 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.178469 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.180994 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.181218 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.181391 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.185390 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.185625 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.185676 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.185683 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.185758 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.185788 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.186122 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.186311 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.186483 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.186622 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.186760 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.187298 4751 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.187363 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.187470 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.187683 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.187832 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.190620 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.194940 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.195200 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.195271 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.195509 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.195609 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.195879 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.196561 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198745 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2790e1c8-65f5-42b1-afca-0f755fdd0f33-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198786 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-serving-cert\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198811 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9424c059-36e9-4ee6-9252-e23c1ef46f4d-config\") pod \"machine-approver-56656f9798-762nm\" (UID: 
\"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198831 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-etcd-client\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198874 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-etcd-client\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-config\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wkwz\" (UniqueName: \"kubernetes.io/projected/91ece5d6-83e6-4293-bbe0-351c6c8e9516-kube-api-access-7wkwz\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198940 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45hx\" (UniqueName: \"kubernetes.io/projected/ae952398-26e2-4b90-8df3-5cb6ff9529e9-kube-api-access-s45hx\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198960 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-config\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.198980 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkdcr\" (UniqueName: \"kubernetes.io/projected/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-kube-api-access-nkdcr\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" Nov 23 
03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199003 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9424c059-36e9-4ee6-9252-e23c1ef46f4d-auth-proxy-config\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199022 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-config\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199040 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199060 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-encryption-config\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199079 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-audit-policies\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199102 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c70b07c-26ce-44aa-adcd-076204b96148-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199122 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-trusted-ca\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199145 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-config\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199167 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199188 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brlv9\" (UniqueName: \"kubernetes.io/projected/1c70b07c-26ce-44aa-adcd-076204b96148-kube-api-access-brlv9\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199209 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-etcd-serving-ca\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199229 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199248 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199268 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-serving-cert\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7svmw\" (UniqueName: \"kubernetes.io/projected/9424c059-36e9-4ee6-9252-e23c1ef46f4d-kube-api-access-7svmw\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199311 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b65a94d8-c328-457e-ac66-f6d62f592d55-images\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.199333 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c70b07c-26ce-44aa-adcd-076204b96148-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208531 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-encryption-config\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208581 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcqc5\" (UniqueName: \"kubernetes.io/projected/b65a94d8-c328-457e-ac66-f6d62f592d55-kube-api-access-jcqc5\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208601 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae952398-26e2-4b90-8df3-5cb6ff9529e9-node-pullsecrets\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208621 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65a94d8-c328-457e-ac66-f6d62f592d55-config\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.202576 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee400172-faca-43a0-8331-fea8b31505db-serving-cert\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208882 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5f4m\" (UniqueName: \"kubernetes.io/projected/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-kube-api-access-j5f4m\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208915 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91ece5d6-83e6-4293-bbe0-351c6c8e9516-audit-dir\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208934 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-client-ca\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208950 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hzd\" (UniqueName: \"kubernetes.io/projected/2790e1c8-65f5-42b1-afca-0f755fdd0f33-kube-api-access-v2hzd\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae952398-26e2-4b90-8df3-5cb6ff9529e9-audit-dir\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208984 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8cb\" (UniqueName: \"kubernetes.io/projected/ee400172-faca-43a0-8331-fea8b31505db-kube-api-access-dj8cb\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.208998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-serving-cert\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209030 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-client-ca\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209141 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-image-import-ca\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " 
pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209182 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5lk47"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209194 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b65a94d8-c328-457e-ac66-f6d62f592d55-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209260 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9424c059-36e9-4ee6-9252-e23c1ef46f4d-machine-approver-tls\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209281 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-config\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209296 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-audit\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209329 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-serving-cert\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209537 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209627 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.209542 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p6j49"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.210199 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.210494 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.210604 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.210796 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-npjlm"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.211171 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-npjlm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.221047 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.232224 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.233239 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.234603 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.262787 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.263800 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.264266 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.264743 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.265971 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.266719 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.267079 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.267117 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.267391 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.267584 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.272677 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.273095 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.273497 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.273549 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.273695 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.273868 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.276865 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.277067 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.277108 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vdnb9"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.281407 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pvr4q"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.284144 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.288384 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.296646 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.298423 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.300409 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.302142 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8khnw"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.302928 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cg8zn"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.303299 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.303677 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.305055 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.306324 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.306616 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.307036 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.308563 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-n7llf"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.309247 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.309815 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rfmp4"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310668 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f21aa19-7efd-424d-9ce7-b735d8356d64-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310706 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455415b4-ad3a-4984-ad2e-50cc10802f94-serving-cert\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310734 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-config\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310756 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-stats-auth\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310781 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brlv9\" (UniqueName: \"kubernetes.io/projected/1c70b07c-26ce-44aa-adcd-076204b96148-kube-api-access-brlv9\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310822 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-etcd-serving-ca\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310841 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310862 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-service-ca-bundle\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-service-ca\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310921 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-metrics-tls\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310941 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-serving-cert\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:34 crc 
kubenswrapper[4751]: I1123 03:57:34.310965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.310986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6d3230f-71b8-4238-94a0-51641c56dae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311011 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7svmw\" (UniqueName: \"kubernetes.io/projected/9424c059-36e9-4ee6-9252-e23c1ef46f4d-kube-api-access-7svmw\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b65a94d8-c328-457e-ac66-f6d62f592d55-images\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311055 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311080 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6d3230f-71b8-4238-94a0-51641c56dae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311105 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c70b07c-26ce-44aa-adcd-076204b96148-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311130 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vttjs\" (UniqueName: \"kubernetes.io/projected/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-kube-api-access-vttjs\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 
03:57:34.311153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-encryption-config\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311177 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnbj\" (UniqueName: \"kubernetes.io/projected/d23032bc-8a84-4925-b4de-f2622d042320-kube-api-access-8rnbj\") pod \"downloads-7954f5f757-n7llf\" (UID: \"d23032bc-8a84-4925-b4de-f2622d042320\") " pod="openshift-console/downloads-7954f5f757-n7llf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311259 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-oauth-serving-cert\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311285 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311307 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-client\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311332 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/455415b4-ad3a-4984-ad2e-50cc10802f94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcqc5\" (UniqueName: \"kubernetes.io/projected/b65a94d8-c328-457e-ac66-f6d62f592d55-kube-api-access-jcqc5\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f21aa19-7efd-424d-9ce7-b735d8356d64-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-policies\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311451 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311475 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6d3230f-71b8-4238-94a0-51641c56dae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae952398-26e2-4b90-8df3-5cb6ff9529e9-node-pullsecrets\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65a94d8-c328-457e-ac66-f6d62f592d55-config\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311555 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-trusted-ca-bundle\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311579 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7x6\" (UniqueName: \"kubernetes.io/projected/455415b4-ad3a-4984-ad2e-50cc10802f94-kube-api-access-ql7x6\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311607 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee400172-faca-43a0-8331-fea8b31505db-serving-cert\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5f4m\" (UniqueName: \"kubernetes.io/projected/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-kube-api-access-j5f4m\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311711 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-service-ca\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311750 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91ece5d6-83e6-4293-bbe0-351c6c8e9516-audit-dir\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311774 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-ca\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2dd\" (UniqueName: \"kubernetes.io/projected/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-kube-api-access-2c2dd\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311826 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-client-ca\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hzd\" (UniqueName: \"kubernetes.io/projected/2790e1c8-65f5-42b1-afca-0f755fdd0f33-kube-api-access-v2hzd\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311875 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae952398-26e2-4b90-8df3-5cb6ff9529e9-audit-dir\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjb4c\" (UniqueName: \"kubernetes.io/projected/1e98face-cbe3-455b-af70-2cd70f20f290-kube-api-access-kjb4c\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311929 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8cb\" (UniqueName: \"kubernetes.io/projected/ee400172-faca-43a0-8331-fea8b31505db-kube-api-access-dj8cb\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311956 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-serving-cert\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.311985 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312033 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwm7l\" (UniqueName: \"kubernetes.io/projected/e6d3230f-71b8-4238-94a0-51641c56dae7-kube-api-access-qwm7l\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312075 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-client-ca\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-console-config\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-image-import-ca\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b65a94d8-c328-457e-ac66-f6d62f592d55-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312218 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9424c059-36e9-4ee6-9252-e23c1ef46f4d-machine-approver-tls\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312244 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-config\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312270 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-audit\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312295 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-serving-cert\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312333 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3841659-4757-4975-9d37-7d70ebea2dcb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tpvvr\" (UID: \"a3841659-4757-4975-9d37-7d70ebea2dcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312376 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312399 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-config-volume\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312425 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2790e1c8-65f5-42b1-afca-0f755fdd0f33-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312448 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-serving-cert\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312471 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-metrics-certs\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9424c059-36e9-4ee6-9252-e23c1ef46f4d-config\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312694 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-etcd-client\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312718 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312744 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-serving-cert\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312771 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj5pv\" (UniqueName: \"kubernetes.io/projected/602b1e45-48d9-4d06-981c-b9c45cb18c1b-kube-api-access-xj5pv\") pod \"migrator-59844c95c7-l4xns\" (UID: \"602b1e45-48d9-4d06-981c-b9c45cb18c1b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312800 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-etcd-client\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lplsg\" (UniqueName: \"kubernetes.io/projected/5f21aa19-7efd-424d-9ce7-b735d8356d64-kube-api-access-lplsg\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-config\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312879 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wkwz\" (UniqueName: \"kubernetes.io/projected/91ece5d6-83e6-4293-bbe0-351c6c8e9516-kube-api-access-7wkwz\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312906 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-oauth-config\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-dir\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312953 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.312978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313007 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45hx\" (UniqueName: \"kubernetes.io/projected/ae952398-26e2-4b90-8df3-5cb6ff9529e9-kube-api-access-s45hx\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313032 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-default-certificate\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313059 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e98face-cbe3-455b-af70-2cd70f20f290-serving-cert\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313084 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-config\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313134 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkdcr\" (UniqueName: \"kubernetes.io/projected/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-kube-api-access-nkdcr\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313158 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzsng\" (UniqueName: \"kubernetes.io/projected/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-kube-api-access-wzsng\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313181 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw62c\" (UniqueName: \"kubernetes.io/projected/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-kube-api-access-bw62c\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313211 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9424c059-36e9-4ee6-9252-e23c1ef46f4d-auth-proxy-config\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313235 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-config\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313259 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313286 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-encryption-config\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313310 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f21aa19-7efd-424d-9ce7-b735d8356d64-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313336 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-audit-policies\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313380 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313408 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-config\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313431 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bdn\" (UniqueName: \"kubernetes.io/projected/4baedc4d-15a1-49d0-b82f-a57fce419702-kube-api-access-b6bdn\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313470 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c70b07c-26ce-44aa-adcd-076204b96148-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313498 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-trusted-ca\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313523 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbbbw\" (UniqueName: \"kubernetes.io/projected/a3841659-4757-4975-9d37-7d70ebea2dcb-kube-api-access-wbbbw\") pod \"cluster-samples-operator-665b6dd947-tpvvr\" (UID: \"a3841659-4757-4975-9d37-7d70ebea2dcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313549 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313686 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ae952398-26e2-4b90-8df3-5cb6ff9529e9-node-pullsecrets\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313696 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b65a94d8-c328-457e-ac66-f6d62f592d55-images\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.313922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-config\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.314460 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b65a94d8-c328-457e-ac66-f6d62f592d55-config\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.314592 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.315081 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.315104 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.315669 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.315757 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sz452"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.316083 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae952398-26e2-4b90-8df3-5cb6ff9529e9-audit-dir\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.316573 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.317336 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.333310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-serving-cert\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.333753 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bbrhw"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.333818 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-etcd-serving-ca\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.333833 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.333880 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x8bwj"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.334501 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91ece5d6-83e6-4293-bbe0-351c6c8e9516-audit-dir\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.346964 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-image-import-ca\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.347453 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-serving-cert\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.347806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-config\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.348027 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-config\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.348150 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.348438 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-client-ca\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.348677 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-audit-policies\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.348698 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-encryption-config\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.348773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-trusted-ca\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.349106 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b65a94d8-c328-457e-ac66-f6d62f592d55-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.349469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9424c059-36e9-4ee6-9252-e23c1ef46f4d-auth-proxy-config\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.349571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-client-ca\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.350198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee400172-faca-43a0-8331-fea8b31505db-serving-cert\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.350561 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-config\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.351000 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.351150 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-audit\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.351237 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.351905 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-config\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.352432 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-service-ca-bundle\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.353667 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae952398-26e2-4b90-8df3-5cb6ff9529e9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.354147 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-etcd-client\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.354672 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9424c059-36e9-4ee6-9252-e23c1ef46f4d-config\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.355158 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.355392 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c70b07c-26ce-44aa-adcd-076204b96148-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.355942 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-etcd-client\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.355956 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2790e1c8-65f5-42b1-afca-0f755fdd0f33-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.356495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-serving-cert\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.356543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/91ece5d6-83e6-4293-bbe0-351c6c8e9516-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.359141 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c70b07c-26ce-44aa-adcd-076204b96148-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.360692 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ae952398-26e2-4b90-8df3-5cb6ff9529e9-encryption-config\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.360935 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91ece5d6-83e6-4293-bbe0-351c6c8e9516-serving-cert\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.363387 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9424c059-36e9-4ee6-9252-e23c1ef46f4d-machine-approver-tls\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.363908 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rtcqm"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.366962 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.368847 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p6j49"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.372714 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.372778 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.376532 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.376551 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.377145 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.378993 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.380572 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.380978 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.381545 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.382306 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vfs9h"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.382809 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.383200 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rfmp4"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.383631 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.384436 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.385398 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.386390 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.387399 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gbjkx"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.389186 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gbjkx"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.389498 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.391505 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4qdtn"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.392683 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4qdtn"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.393854 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.395117 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.396691 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8khnw"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.398416 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zc84k"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.399949 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.401374 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sz452"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.403748 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.404827 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.405598 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.406721 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.408236 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.409933 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-npjlm"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.411092 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.411954 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.412964 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vfs9h"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz"]
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414230 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414280 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-client\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414299 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/455415b4-ad3a-4984-ad2e-50cc10802f94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414335 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-oauth-serving-cert\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f21aa19-7efd-424d-9ce7-b735d8356d64-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-policies\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414660 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6d3230f-71b8-4238-94a0-51641c56dae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-trusted-ca-bundle\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414740 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql7x6\" (UniqueName: \"kubernetes.io/projected/455415b4-ad3a-4984-ad2e-50cc10802f94-kube-api-access-ql7x6\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414790 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-ca\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414808 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-service-ca\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414830 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2dd\" (UniqueName: \"kubernetes.io/projected/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-kube-api-access-2c2dd\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414857 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjb4c\" (UniqueName: \"kubernetes.io/projected/1e98face-cbe3-455b-af70-2cd70f20f290-kube-api-access-kjb4c\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414882 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414866 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/455415b4-ad3a-4984-ad2e-50cc10802f94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414915 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-console-config\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414932 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414951 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwm7l\" (UniqueName: \"kubernetes.io/projected/e6d3230f-71b8-4238-94a0-51641c56dae7-kube-api-access-qwm7l\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.414990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3841659-4757-4975-9d37-7d70ebea2dcb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tpvvr\" (UID: \"a3841659-4757-4975-9d37-7d70ebea2dcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415011 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-config-volume\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415059 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-metrics-certs\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415077 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-serving-cert\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415100 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lplsg\" (UniqueName: \"kubernetes.io/projected/5f21aa19-7efd-424d-9ce7-b735d8356d64-kube-api-access-lplsg\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415111 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415120 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj5pv\" (UniqueName: \"kubernetes.io/projected/602b1e45-48d9-4d06-981c-b9c45cb18c1b-kube-api-access-xj5pv\") pod \"migrator-59844c95c7-l4xns\" (UID: \"602b1e45-48d9-4d06-981c-b9c45cb18c1b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415206 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415234 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-oauth-config\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415258 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-dir\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-default-certificate\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415340 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e98face-cbe3-455b-af70-2cd70f20f290-serving-cert\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415380 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-oauth-serving-cert\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzsng\" (UniqueName: \"kubernetes.io/projected/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-kube-api-access-wzsng\") pod
\"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415441 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw62c\" (UniqueName: \"kubernetes.io/projected/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-kube-api-access-bw62c\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415465 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f21aa19-7efd-424d-9ce7-b735d8356d64-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415542 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415597 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-config\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415627 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbbbw\" (UniqueName: \"kubernetes.io/projected/a3841659-4757-4975-9d37-7d70ebea2dcb-kube-api-access-wbbbw\") pod \"cluster-samples-operator-665b6dd947-tpvvr\" (UID: \"a3841659-4757-4975-9d37-7d70ebea2dcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6bdn\" (UniqueName: \"kubernetes.io/projected/4baedc4d-15a1-49d0-b82f-a57fce419702-kube-api-access-b6bdn\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415691 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-policies\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: 
\"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415710 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455415b4-ad3a-4984-ad2e-50cc10802f94-serving-cert\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415769 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f21aa19-7efd-424d-9ce7-b735d8356d64-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415787 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-stats-auth\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415807 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-service-ca-bundle\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-service-ca\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415844 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-metrics-tls\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415864 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415879 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6d3230f-71b8-4238-94a0-51641c56dae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415893 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-ca\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415904 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415961 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6d3230f-71b8-4238-94a0-51641c56dae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415958 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4qdtn"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.415984 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vttjs\" (UniqueName: \"kubernetes.io/projected/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-kube-api-access-vttjs\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.416003 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnbj\" (UniqueName: \"kubernetes.io/projected/d23032bc-8a84-4925-b4de-f2622d042320-kube-api-access-8rnbj\") pod \"downloads-7954f5f757-n7llf\" (UID: \"d23032bc-8a84-4925-b4de-f2622d042320\") " pod="openshift-console/downloads-7954f5f757-n7llf" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.416053 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cg8zn"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.416070 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-dir\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc 
kubenswrapper[4751]: I1123 03:57:34.416649 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-trusted-ca-bundle\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.417036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.417127 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-service-ca\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.417165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gbjkx"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.417318 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e98face-cbe3-455b-af70-2cd70f20f290-config\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.418425 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wd8t9"] Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.419002 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.419462 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-service-ca\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.419940 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.420234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e98face-cbe3-455b-af70-2cd70f20f290-serving-cert\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.420633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.420688 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.420823 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-console-config\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.420907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.421249 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e98face-cbe3-455b-af70-2cd70f20f290-etcd-client\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.421275 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/a3841659-4757-4975-9d37-7d70ebea2dcb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tpvvr\" (UID: \"a3841659-4757-4975-9d37-7d70ebea2dcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.421542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.422070 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.423185 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/455415b4-ad3a-4984-ad2e-50cc10802f94-serving-cert\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.423434 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f21aa19-7efd-424d-9ce7-b735d8356d64-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.423809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.423924 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-oauth-config\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.424138 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.424236 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.424848 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.425034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.426315 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.428134 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-default-certificate\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.428999 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-serving-cert\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.443606 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.449237 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-stats-auth\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.463527 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.474707 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-metrics-certs\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.483922 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.487745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-service-ca-bundle\") pod 
\"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.503773 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.524922 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.544563 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.550523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f21aa19-7efd-424d-9ce7-b735d8356d64-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.564004 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.583713 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.604300 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.624699 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.643414 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.670653 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.681176 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e6d3230f-71b8-4238-94a0-51641c56dae7-trusted-ca\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.683278 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.703804 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.711034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6d3230f-71b8-4238-94a0-51641c56dae7-metrics-tls\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.724852 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.733624 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-config-volume\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.744566 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.764570 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.775012 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-metrics-tls\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.784628 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.804416 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.844747 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.865092 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.885607 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.904740 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.925043 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.945007 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.964259 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 23 03:57:34 crc kubenswrapper[4751]: I1123 03:57:34.984696 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.005416 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.025417 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 
23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.045466 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.065065 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.084936 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.104567 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.124868 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.144236 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.164281 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.184704 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.205141 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.224384 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.245004 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.264564 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.285241 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.305123 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.322083 4751 request.go:700] Waited for 1.018261348s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.324419 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.344491 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 
03:57:35.365011 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.385319 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.405144 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.424694 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.445246 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.464736 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.484313 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.520088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brlv9\" (UniqueName: \"kubernetes.io/projected/1c70b07c-26ce-44aa-adcd-076204b96148-kube-api-access-brlv9\") pod \"openshift-apiserver-operator-796bbdcf4f-cdgq6\" (UID: \"1c70b07c-26ce-44aa-adcd-076204b96148\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.552742 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5f4m\" (UniqueName: \"kubernetes.io/projected/15b90fa4-76d8-48bc-ad54-bf2b92ab2349-kube-api-access-j5f4m\") pod \"console-operator-58897d9998-x8bwj\" (UID: \"15b90fa4-76d8-48bc-ad54-bf2b92ab2349\") " pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.572859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2hzd\" (UniqueName: \"kubernetes.io/projected/2790e1c8-65f5-42b1-afca-0f755fdd0f33-kube-api-access-v2hzd\") pod \"route-controller-manager-6576b87f9c-zpvjg\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.590809 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.594447 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8cb\" (UniqueName: \"kubernetes.io/projected/ee400172-faca-43a0-8331-fea8b31505db-kube-api-access-dj8cb\") pod \"controller-manager-879f6c89f-pvr4q\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.603953 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.623904 4751 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.626735 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.654969 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.663885 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.664655 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.715786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkdcr\" (UniqueName: \"kubernetes.io/projected/153596a1-7f1b-4fee-bc0f-fad7a469b3dc-kube-api-access-nkdcr\") pod \"authentication-operator-69f744f599-g5d4w\" (UID: \"153596a1-7f1b-4fee-bc0f-fad7a469b3dc\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.725870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wkwz\" (UniqueName: \"kubernetes.io/projected/91ece5d6-83e6-4293-bbe0-351c6c8e9516-kube-api-access-7wkwz\") pod \"apiserver-7bbb656c7d-nt2zf\" (UID: \"91ece5d6-83e6-4293-bbe0-351c6c8e9516\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.739903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcqc5\" (UniqueName: \"kubernetes.io/projected/b65a94d8-c328-457e-ac66-f6d62f592d55-kube-api-access-jcqc5\") pod \"machine-api-operator-5694c8668f-8kg8p\" (UID: \"b65a94d8-c328-457e-ac66-f6d62f592d55\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.761413 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7svmw\" (UniqueName: \"kubernetes.io/projected/9424c059-36e9-4ee6-9252-e23c1ef46f4d-kube-api-access-7svmw\") pod \"machine-approver-56656f9798-762nm\" (UID: \"9424c059-36e9-4ee6-9252-e23c1ef46f4d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.765089 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.766689 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.778092 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.785274 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.787422 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.789980 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.799494 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.804455 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.840685 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45hx\" (UniqueName: \"kubernetes.io/projected/ae952398-26e2-4b90-8df3-5cb6ff9529e9-kube-api-access-s45hx\") pod \"apiserver-76f77b778f-vdnb9\" (UID: \"ae952398-26e2-4b90-8df3-5cb6ff9529e9\") " pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.844698 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.864691 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.880506 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6"] Nov 23 03:57:35 crc kubenswrapper[4751]: W1123 03:57:35.904704 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c70b07c_26ce_44aa_adcd_076204b96148.slice/crio-6a3d9dae0492d0d20e5c0d184928e5e17caf85be9272064ad1c6429537b12ca9 WatchSource:0}: Error finding container 6a3d9dae0492d0d20e5c0d184928e5e17caf85be9272064ad1c6429537b12ca9: Status 404 returned error can't find the container with id 6a3d9dae0492d0d20e5c0d184928e5e17caf85be9272064ad1c6429537b12ca9 Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.906836 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.916106 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pvr4q"] Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.923294 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.945997 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.964326 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.983021 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" Nov 23 03:57:35 crc kubenswrapper[4751]: I1123 03:57:35.988246 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.002670 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.004853 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.015947 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8kg8p"] Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.023659 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 03:57:36 crc kubenswrapper[4751]: W1123 03:57:36.029136 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb65a94d8_c328_457e_ac66_f6d62f592d55.slice/crio-ab3f05187abf4e9c5c7e6bb1688d9f8d3a697066ba3e6ab72e5d99ba793976c8 WatchSource:0}: Error finding container ab3f05187abf4e9c5c7e6bb1688d9f8d3a697066ba3e6ab72e5d99ba793976c8: Status 404 returned error can't find the container with id ab3f05187abf4e9c5c7e6bb1688d9f8d3a697066ba3e6ab72e5d99ba793976c8 Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.037219 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf"] Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.043266 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.063530 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.085279 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.103689 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.123928 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.143657 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.163763 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.175711 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vdnb9"] Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.184707 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 23 03:57:36 crc kubenswrapper[4751]: W1123 03:57:36.185845 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae952398_26e2_4b90_8df3_5cb6ff9529e9.slice/crio-9e659ae87f7b790f35f8edc74c2b618da2cff5daa8dff8fade6714b1f4d686df WatchSource:0}: Error finding container 9e659ae87f7b790f35f8edc74c2b618da2cff5daa8dff8fade6714b1f4d686df: Status 404 returned error can't find the container with id 9e659ae87f7b790f35f8edc74c2b618da2cff5daa8dff8fade6714b1f4d686df Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.203480 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.224362 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.246121 4751 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.263535 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-x8bwj"] Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.265070 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"] Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.269461 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.274377 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g5d4w"] Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.283559 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.320788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql7x6\" (UniqueName: \"kubernetes.io/projected/455415b4-ad3a-4984-ad2e-50cc10802f94-kube-api-access-ql7x6\") pod \"openshift-config-operator-7777fb866f-7gtxn\" (UID: \"455415b4-ad3a-4984-ad2e-50cc10802f94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.322274 4751 request.go:700] Waited for 1.906995898s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.335989 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj5pv\" (UniqueName: \"kubernetes.io/projected/602b1e45-48d9-4d06-981c-b9c45cb18c1b-kube-api-access-xj5pv\") pod \"migrator-59844c95c7-l4xns\" (UID: \"602b1e45-48d9-4d06-981c-b9c45cb18c1b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.359259 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f21aa19-7efd-424d-9ce7-b735d8356d64-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 
Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.378568 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnbj\" (UniqueName: \"kubernetes.io/projected/d23032bc-8a84-4925-b4de-f2622d042320-kube-api-access-8rnbj\") pod \"downloads-7954f5f757-n7llf\" (UID: \"d23032bc-8a84-4925-b4de-f2622d042320\") " pod="openshift-console/downloads-7954f5f757-n7llf" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.398044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwm7l\" (UniqueName: \"kubernetes.io/projected/e6d3230f-71b8-4238-94a0-51641c56dae7-kube-api-access-qwm7l\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.431599 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbbbw\" (UniqueName: \"kubernetes.io/projected/a3841659-4757-4975-9d37-7d70ebea2dcb-kube-api-access-wbbbw\") pod \"cluster-samples-operator-665b6dd947-tpvvr\" (UID: \"a3841659-4757-4975-9d37-7d70ebea2dcb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.438914 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2dd\" (UniqueName: \"kubernetes.io/projected/4c11da5a-8eab-4e0a-a06c-0c38d7cd8596-kube-api-access-2c2dd\") pod \"dns-default-npjlm\" (UID: \"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596\") " pod="openshift-dns/dns-default-npjlm" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.458467 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6bdn\" (UniqueName: \"kubernetes.io/projected/4baedc4d-15a1-49d0-b82f-a57fce419702-kube-api-access-b6bdn\") pod \"console-f9d7485db-bbrhw\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") " pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.459662 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.481437 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjb4c\" (UniqueName: \"kubernetes.io/projected/1e98face-cbe3-455b-af70-2cd70f20f290-kube-api-access-kjb4c\") pod \"etcd-operator-b45778765-zc84k\" (UID: \"1e98face-cbe3-455b-af70-2cd70f20f290\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.484300 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.486812 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-n7llf" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.493374 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.495876 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" event={"ID":"1c70b07c-26ce-44aa-adcd-076204b96148","Type":"ContainerStarted","Data":"9744cde4f37bfcd7038e45d30689d420d3a1e959b28cf4603512c463ae806cb6"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.495951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" event={"ID":"1c70b07c-26ce-44aa-adcd-076204b96148","Type":"ContainerStarted","Data":"6a3d9dae0492d0d20e5c0d184928e5e17caf85be9272064ad1c6429537b12ca9"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.497988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" event={"ID":"b65a94d8-c328-457e-ac66-f6d62f592d55","Type":"ContainerStarted","Data":"915f7b94e49b5f03d3587698379b618ca3955dcb5145e8da0273d0d9042599ea"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.498016 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" event={"ID":"b65a94d8-c328-457e-ac66-f6d62f592d55","Type":"ContainerStarted","Data":"75f9cade87ba958195168d45a5a7d170c74ebb6242b3bb40cc2d7002e73f309c"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.498027 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" event={"ID":"b65a94d8-c328-457e-ac66-f6d62f592d55","Type":"ContainerStarted","Data":"ab3f05187abf4e9c5c7e6bb1688d9f8d3a697066ba3e6ab72e5d99ba793976c8"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.499623 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" event={"ID":"2790e1c8-65f5-42b1-afca-0f755fdd0f33","Type":"ContainerStarted","Data":"4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.499654 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" event={"ID":"2790e1c8-65f5-42b1-afca-0f755fdd0f33","Type":"ContainerStarted","Data":"b35d341f3b52b4706736a0159ea64b1eb2588a4a4827a5f18f26769efd198ac4"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.501159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" event={"ID":"153596a1-7f1b-4fee-bc0f-fad7a469b3dc","Type":"ContainerStarted","Data":"a8d3617b4f04bd113809d784c74100f389c71412bf3d37f03283d826661d7950"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.501209 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" event={"ID":"153596a1-7f1b-4fee-bc0f-fad7a469b3dc","Type":"ContainerStarted","Data":"d4bd7975d8a0eb9af1bba2a47bac69829b4919f54c9bf583480b0fed35566c1e"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.502974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" event={"ID":"15b90fa4-76d8-48bc-ad54-bf2b92ab2349","Type":"ContainerStarted","Data":"040d4008dbc93e2fde7ea07024e42cbc5b6d99dad37dfa91ab02c891720fc8c6"} Nov 23 
03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.503009 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" event={"ID":"15b90fa4-76d8-48bc-ad54-bf2b92ab2349","Type":"ContainerStarted","Data":"d6350c8b1af27d44e39692604266e8d4500a293da8ea88a45f73894327565648"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.503780 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.504318 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.505293 4751 patch_prober.go:28] interesting pod/console-operator-58897d9998-x8bwj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.505331 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" podUID="15b90fa4-76d8-48bc-ad54-bf2b92ab2349" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.507224 4751 generic.go:334] "Generic (PLEG): container finished" podID="ae952398-26e2-4b90-8df3-5cb6ff9529e9" containerID="58c10d1dd27c08e4af35e7c1b95579a66b0b808e10207d96e17bd83aa08cea8f" exitCode=0 Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.507270 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" event={"ID":"ae952398-26e2-4b90-8df3-5cb6ff9529e9","Type":"ContainerDied","Data":"58c10d1dd27c08e4af35e7c1b95579a66b0b808e10207d96e17bd83aa08cea8f"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.507568 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" event={"ID":"ae952398-26e2-4b90-8df3-5cb6ff9529e9","Type":"ContainerStarted","Data":"9e659ae87f7b790f35f8edc74c2b618da2cff5daa8dff8fade6714b1f4d686df"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.510315 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" event={"ID":"ee400172-faca-43a0-8331-fea8b31505db","Type":"ContainerStarted","Data":"679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.510374 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" event={"ID":"ee400172-faca-43a0-8331-fea8b31505db","Type":"ContainerStarted","Data":"570a643902743aaefb89472fe25712eafd6091b3895548a6e4d7e6a1adede02e"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.511175 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.516478 4751 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pvr4q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": 
dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.517641 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" podUID="ee400172-faca-43a0-8331-fea8b31505db" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.517775 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" event={"ID":"9424c059-36e9-4ee6-9252-e23c1ef46f4d","Type":"ContainerStarted","Data":"8e515b36ad0d461215a7af704c63d51a546a05a16e23c7a9524ef363248bcab7"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.517802 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" event={"ID":"9424c059-36e9-4ee6-9252-e23c1ef46f4d","Type":"ContainerStarted","Data":"b58ec5ee076f53a44519dba954a5e8b430bc4f96b8ace6e719bc5f891f3a027b"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.517814 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" event={"ID":"9424c059-36e9-4ee6-9252-e23c1ef46f4d","Type":"ContainerStarted","Data":"bfe739cd799618d4c7d67d0a9531f28585b57682c9bec5567fa4b756a82adca2"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.520175 4751 generic.go:334] "Generic (PLEG): container finished" podID="91ece5d6-83e6-4293-bbe0-351c6c8e9516" containerID="390be194200937fc99733dddcfc24f14eb71a8f100ff1486e218087ecf88a53f" exitCode=0 Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.520223 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" event={"ID":"91ece5d6-83e6-4293-bbe0-351c6c8e9516","Type":"ContainerDied","Data":"390be194200937fc99733dddcfc24f14eb71a8f100ff1486e218087ecf88a53f"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.520253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" event={"ID":"91ece5d6-83e6-4293-bbe0-351c6c8e9516","Type":"ContainerStarted","Data":"c76acf43878b43c93acc86d2bc59fac6825c86e9971634220d2ae78de1aa64c8"} Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.525104 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.530586 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-npjlm"
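
[editor's note] The `Readiness probe status=failure ... connect: connection refused` records a few lines up (for console-operator on 10.217.0.15:8443/readyz and controller-manager on 10.217.0.5:8443/healthz) are expected this early in startup: kubelet begins probing as soon as the container reports started, and until the process binds its port the TCP dial is refused. A rough Go equivalent of such an HTTPS readiness check; the URL and one-second timeout are illustrative, and certificate verification is skipped the way kubelet's HTTPS probes generally are, so a self-signed serving cert doesn't mask the interesting failure mode:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeReadiness performs one HTTPS GET and treats any transport error or a
// non-2xx/3xx status as "not ready", loosely mirroring prober.go above.
func probeReadiness(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			// Skip cert verification so the failure we see is the dial error,
			// not an x509 error from the operator's self-signed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// Before the container binds its port, this returns
		// "dial tcp ...: connect: connection refused".
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("readiness probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeReadiness("https://10.217.0.15:8443/readyz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```
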
Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.538117 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.562051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzsng\" (UniqueName: \"kubernetes.io/projected/5a7bbd2e-2fd6-42d0-948d-3fe6d136e752-kube-api-access-wzsng\") pod \"router-default-5444994796-5lk47\" (UID: \"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752\") " pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.577870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw62c\" (UniqueName: \"kubernetes.io/projected/af9edb45-f2d2-41c5-b003-a5f2d2777cc8-kube-api-access-bw62c\") pod \"openshift-controller-manager-operator-756b6f6bc6-vrbtv\" (UID: \"af9edb45-f2d2-41c5-b003-a5f2d2777cc8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.602854 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lplsg\" (UniqueName: \"kubernetes.io/projected/5f21aa19-7efd-424d-9ce7-b735d8356d64-kube-api-access-lplsg\") pod \"cluster-image-registry-operator-dc59b4c8b-pqqs9\" (UID: \"5f21aa19-7efd-424d-9ce7-b735d8356d64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.622724 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e6d3230f-71b8-4238-94a0-51641c56dae7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q7w24\" (UID: \"e6d3230f-71b8-4238-94a0-51641c56dae7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.639614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vttjs\" (UniqueName: \"kubernetes.io/projected/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-kube-api-access-vttjs\") pod \"oauth-openshift-558db77b4-rtcqm\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.652738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.652774 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f5740c4-4925-4b31-a055-45993f3811b8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.652793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-registry-certificates\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.652809 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f5740c4-4925-4b31-a055-45993f3811b8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.652829 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-bound-sa-token\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.652860 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-registry-tls\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.652876 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzxh\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-kube-api-access-crzxh\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.652913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-trusted-ca\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: E1123 03:57:36.653300 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.153282146 +0000 UTC m=+153.346953505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.707781 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.719567 4751 util.go:30] "No sandbox for pod can be found. 
Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.719567 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.754278 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:36 crc kubenswrapper[4751]: E1123 03:57:36.754434 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.254413172 +0000 UTC m=+153.448084531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.754502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-srv-cert\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.754547 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-trusted-ca\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.754564 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.755706 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-trusted-ca\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.755741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd84f54-df4c-4c6c-af61-4f92716457a4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.755812 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8f77961-5ffa-496c-ac26-aac2c5461f14-signing-cabundle\") pod \"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.756133 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-socket-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.756172 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf48s\" (UniqueName: \"kubernetes.io/projected/b284192d-dce2-4f47-b7bf-b44841965150-kube-api-access-nf48s\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.756193 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/692f5f61-b926-46fa-aa1d-e156d093a7ca-certs\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.756214 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6326297-0945-46f8-825e-96e057069445-serving-cert\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.756273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f5740c4-4925-4b31-a055-45993f3811b8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.756294 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757239 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzssx\" (UniqueName: \"kubernetes.io/projected/e8f77961-5ffa-496c-ac26-aac2c5461f14-kube-api-access-zzssx\") pod \"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757367 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-bound-sa-token\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757385 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/692f5f61-b926-46fa-aa1d-e156d093a7ca-node-bootstrap-token\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757443 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8f77961-5ffa-496c-ac26-aac2c5461f14-signing-key\") pod \"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6326297-0945-46f8-825e-96e057069445-config\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757496 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-mountpoint-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757514 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757568 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a236822-dbca-49e0-af7f-417172a744e0-tmpfs\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757631 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/243f13ba-48c3-46bc-9fa1-d345fd0ea542-proxy-tls\") pod 
\"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-registry-tls\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd84f54-df4c-4c6c-af61-4f92716457a4-config\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757730 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/acadd328-b207-48a2-9041-00d517ed7c39-srv-cert\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757745 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-registration-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1680deb-dbd2-45ee-9556-8a478b70ea3d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cg8zn\" (UID: \"b1680deb-dbd2-45ee-9556-8a478b70ea3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757804 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzxh\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-kube-api-access-crzxh\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.757977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/243f13ba-48c3-46bc-9fa1-d345fd0ea542-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758016 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d0ec04b-8c97-487a-a899-9b1a20e6a96a-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-zs6nk\" (UID: \"1d0ec04b-8c97-487a-a899-9b1a20e6a96a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758178 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284192d-dce2-4f47-b7bf-b44841965150-config-volume\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758374 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48576b93-8342-4378-8e51-404c96d80d42-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758392 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77124263-db88-438c-9e26-34a934746c27-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k7zwz\" (UID: \"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758407 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtq8s\" (UniqueName: \"kubernetes.io/projected/37c87365-7c6f-4f74-957d-3511c274b1c0-kube-api-access-vtq8s\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758450 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c7428e18-93c0-4bc3-b774-93e08c88ff63-metrics-tls\") pod \"dns-operator-744455d44c-8khnw\" (UID: \"c7428e18-93c0-4bc3-b774-93e08c88ff63\") " pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758468 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrds7\" (UniqueName: \"kubernetes.io/projected/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-kube-api-access-nrds7\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/8a236822-dbca-49e0-af7f-417172a744e0-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758707 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/243f13ba-48c3-46bc-9fa1-d345fd0ea542-images\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7612e7-e0b7-4b66-a948-fc5bc3aa3033-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cbnj9\" (UID: \"fa7612e7-e0b7-4b66-a948-fc5bc3aa3033\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758794 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-registry-certificates\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758821 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f5740c4-4925-4b31-a055-45993f3811b8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4mp\" (UniqueName: \"kubernetes.io/projected/c6326297-0945-46f8-825e-96e057069445-kube-api-access-md4mp\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758857 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmm2\" (UniqueName: \"kubernetes.io/projected/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-kube-api-access-pdmm2\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758892 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzf2c\" (UniqueName: \"kubernetes.io/projected/692f5f61-b926-46fa-aa1d-e156d093a7ca-kube-api-access-mzf2c\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtxtd\" (UniqueName: \"kubernetes.io/projected/77124263-db88-438c-9e26-34a934746c27-kube-api-access-gtxtd\") pod \"machine-config-controller-84d6567774-k7zwz\" (UID: \"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.758931 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4bd\" (UniqueName: \"kubernetes.io/projected/8a236822-dbca-49e0-af7f-417172a744e0-kube-api-access-jt4bd\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.759187 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2l6t\" (UniqueName: \"kubernetes.io/projected/1d0ec04b-8c97-487a-a899-9b1a20e6a96a-kube-api-access-d2l6t\") pod \"package-server-manager-789f6589d5-zs6nk\" (UID: \"1d0ec04b-8c97-487a-a899-9b1a20e6a96a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.761468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f5740c4-4925-4b31-a055-45993f3811b8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: E1123 03:57:36.765595 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.265577796 +0000 UTC m=+153.459249155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.766060 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f5740c4-4925-4b31-a055-45993f3811b8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.767295 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-registry-certificates\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.759239 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2fr8\" (UniqueName: \"kubernetes.io/projected/acadd328-b207-48a2-9041-00d517ed7c39-kube-api-access-x2fr8\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.767756 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.768381 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d3ce6ac-9cf3-400d-99ce-3546ee615007-cert\") pod \"ingress-canary-gbjkx\" (UID: \"3d3ce6ac-9cf3-400d-99ce-3546ee615007\") " pod="openshift-ingress-canary/ingress-canary-gbjkx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.768569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77124263-db88-438c-9e26-34a934746c27-proxy-tls\") pod \"machine-config-controller-84d6567774-k7zwz\" (UID: \"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.768991 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-csi-data-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.769159 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7n8\" (UniqueName: \"kubernetes.io/projected/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-kube-api-access-8r7n8\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.769325 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/acadd328-b207-48a2-9041-00d517ed7c39-profile-collector-cert\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.769496 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.769655 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a236822-dbca-49e0-af7f-417172a744e0-webhook-cert\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.769852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8mzf\" (UniqueName: \"kubernetes.io/projected/b1680deb-dbd2-45ee-9556-8a478b70ea3d-kube-api-access-s8mzf\") pod \"multus-admission-controller-857f4d67dd-cg8zn\" (UID: \"b1680deb-dbd2-45ee-9556-8a478b70ea3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.769912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.769939 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48576b93-8342-4378-8e51-404c96d80d42-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.769999 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjx55\" (UniqueName: \"kubernetes.io/projected/c7428e18-93c0-4bc3-b774-93e08c88ff63-kube-api-access-zjx55\") pod \"dns-operator-744455d44c-8khnw\" (UID: \"c7428e18-93c0-4bc3-b774-93e08c88ff63\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770027 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-n7llf"] Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770287 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770442 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dd84f54-df4c-4c6c-af61-4f92716457a4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770487 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xkk6\" (UniqueName: \"kubernetes.io/projected/fa7612e7-e0b7-4b66-a948-fc5bc3aa3033-kube-api-access-2xkk6\") pod \"control-plane-machine-set-operator-78cbb6b69f-cbnj9\" (UID: \"fa7612e7-e0b7-4b66-a948-fc5bc3aa3033\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770571 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-plugins-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48576b93-8342-4378-8e51-404c96d80d42-config\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284192d-dce2-4f47-b7bf-b44841965150-secret-volume\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770784 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96bxk\" (UniqueName: \"kubernetes.io/projected/3d3ce6ac-9cf3-400d-99ce-3546ee615007-kube-api-access-96bxk\") pod \"ingress-canary-gbjkx\" (UID: \"3d3ce6ac-9cf3-400d-99ce-3546ee615007\") " pod="openshift-ingress-canary/ingress-canary-gbjkx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.770808 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95kzn\" (UniqueName: \"kubernetes.io/projected/243f13ba-48c3-46bc-9fa1-d345fd0ea542-kube-api-access-95kzn\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.774680 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-registry-tls\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.781670 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:36 crc kubenswrapper[4751]: W1123 03:57:36.799275 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd23032bc_8a84_4925_b4de_f2622d042320.slice/crio-29b0cd844b4b7508f53c11d2a4468b91ed33a8010eebb89e30599f52e90f5816 WatchSource:0}: Error finding container 29b0cd844b4b7508f53c11d2a4468b91ed33a8010eebb89e30599f52e90f5816: Status 404 returned error can't find the container with id 29b0cd844b4b7508f53c11d2a4468b91ed33a8010eebb89e30599f52e90f5816 Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.800624 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.809157 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.809248 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-bound-sa-token\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.816658 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns"] Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.826666 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.829063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzxh\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-kube-api-access-crzxh\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.838750 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"] Nov 23 03:57:36 crc kubenswrapper[4751]: W1123 03:57:36.864117 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod455415b4_ad3a_4984_ad2e_50cc10802f94.slice/crio-5792d26179014d52422690a66319e3e3e2a56bb351f91de59ea6de39f95dbb15 WatchSource:0}: Error finding container 5792d26179014d52422690a66319e3e3e2a56bb351f91de59ea6de39f95dbb15: Status 404 returned error can't find the container with id 5792d26179014d52422690a66319e3e3e2a56bb351f91de59ea6de39f95dbb15 Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872093 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/692f5f61-b926-46fa-aa1d-e156d093a7ca-node-bootstrap-token\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872303 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8f77961-5ffa-496c-ac26-aac2c5461f14-signing-key\") pod \"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872321 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6326297-0945-46f8-825e-96e057069445-config\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872337 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-mountpoint-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872365 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872381 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a236822-dbca-49e0-af7f-417172a744e0-tmpfs\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872400 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/243f13ba-48c3-46bc-9fa1-d345fd0ea542-proxy-tls\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd84f54-df4c-4c6c-af61-4f92716457a4-config\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872429 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/acadd328-b207-48a2-9041-00d517ed7c39-srv-cert\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-registration-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872458 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1680deb-dbd2-45ee-9556-8a478b70ea3d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cg8zn\" (UID: \"b1680deb-dbd2-45ee-9556-8a478b70ea3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/243f13ba-48c3-46bc-9fa1-d345fd0ea542-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d0ec04b-8c97-487a-a899-9b1a20e6a96a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zs6nk\" (UID: \"1d0ec04b-8c97-487a-a899-9b1a20e6a96a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872512 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284192d-dce2-4f47-b7bf-b44841965150-config-volume\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872529 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48576b93-8342-4378-8e51-404c96d80d42-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872546 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77124263-db88-438c-9e26-34a934746c27-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k7zwz\" (UID: \"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872564 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872582 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtq8s\" (UniqueName: \"kubernetes.io/projected/37c87365-7c6f-4f74-957d-3511c274b1c0-kube-api-access-vtq8s\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c7428e18-93c0-4bc3-b774-93e08c88ff63-metrics-tls\") pod \"dns-operator-744455d44c-8khnw\" (UID: \"c7428e18-93c0-4bc3-b774-93e08c88ff63\") " pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrds7\" (UniqueName: \"kubernetes.io/projected/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-kube-api-access-nrds7\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " 
pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a236822-dbca-49e0-af7f-417172a744e0-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/243f13ba-48c3-46bc-9fa1-d345fd0ea542-images\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872664 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7612e7-e0b7-4b66-a948-fc5bc3aa3033-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cbnj9\" (UID: \"fa7612e7-e0b7-4b66-a948-fc5bc3aa3033\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872694 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmm2\" (UniqueName: \"kubernetes.io/projected/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-kube-api-access-pdmm2\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872712 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md4mp\" (UniqueName: \"kubernetes.io/projected/c6326297-0945-46f8-825e-96e057069445-kube-api-access-md4mp\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzf2c\" (UniqueName: \"kubernetes.io/projected/692f5f61-b926-46fa-aa1d-e156d093a7ca-kube-api-access-mzf2c\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtxtd\" (UniqueName: \"kubernetes.io/projected/77124263-db88-438c-9e26-34a934746c27-kube-api-access-gtxtd\") pod \"machine-config-controller-84d6567774-k7zwz\" (UID: \"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872776 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt4bd\" (UniqueName: \"kubernetes.io/projected/8a236822-dbca-49e0-af7f-417172a744e0-kube-api-access-jt4bd\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872791 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2l6t\" (UniqueName: \"kubernetes.io/projected/1d0ec04b-8c97-487a-a899-9b1a20e6a96a-kube-api-access-d2l6t\") pod \"package-server-manager-789f6589d5-zs6nk\" (UID: \"1d0ec04b-8c97-487a-a899-9b1a20e6a96a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872809 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2fr8\" (UniqueName: \"kubernetes.io/projected/acadd328-b207-48a2-9041-00d517ed7c39-kube-api-access-x2fr8\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872825 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872840 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d3ce6ac-9cf3-400d-99ce-3546ee615007-cert\") pod \"ingress-canary-gbjkx\" (UID: \"3d3ce6ac-9cf3-400d-99ce-3546ee615007\") " pod="openshift-ingress-canary/ingress-canary-gbjkx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77124263-db88-438c-9e26-34a934746c27-proxy-tls\") pod \"machine-config-controller-84d6567774-k7zwz\" (UID: \"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872870 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-csi-data-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872887 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7n8\" (UniqueName: \"kubernetes.io/projected/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-kube-api-access-8r7n8\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872901 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/acadd328-b207-48a2-9041-00d517ed7c39-profile-collector-cert\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872915 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872930 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a236822-dbca-49e0-af7f-417172a744e0-webhook-cert\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872947 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8mzf\" (UniqueName: \"kubernetes.io/projected/b1680deb-dbd2-45ee-9556-8a478b70ea3d-kube-api-access-s8mzf\") pod \"multus-admission-controller-857f4d67dd-cg8zn\" (UID: \"b1680deb-dbd2-45ee-9556-8a478b70ea3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872963 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872978 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48576b93-8342-4378-8e51-404c96d80d42-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.872995 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjx55\" (UniqueName: \"kubernetes.io/projected/c7428e18-93c0-4bc3-b774-93e08c88ff63-kube-api-access-zjx55\") pod \"dns-operator-744455d44c-8khnw\" (UID: \"c7428e18-93c0-4bc3-b774-93e08c88ff63\") " pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873010 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dd84f54-df4c-4c6c-af61-4f92716457a4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873025 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xkk6\" (UniqueName: \"kubernetes.io/projected/fa7612e7-e0b7-4b66-a948-fc5bc3aa3033-kube-api-access-2xkk6\") pod \"control-plane-machine-set-operator-78cbb6b69f-cbnj9\" (UID: \"fa7612e7-e0b7-4b66-a948-fc5bc3aa3033\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873047 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-plugins-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873061 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48576b93-8342-4378-8e51-404c96d80d42-config\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873077 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284192d-dce2-4f47-b7bf-b44841965150-secret-volume\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873092 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96bxk\" (UniqueName: \"kubernetes.io/projected/3d3ce6ac-9cf3-400d-99ce-3546ee615007-kube-api-access-96bxk\") pod \"ingress-canary-gbjkx\" (UID: \"3d3ce6ac-9cf3-400d-99ce-3546ee615007\") " pod="openshift-ingress-canary/ingress-canary-gbjkx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95kzn\" (UniqueName: \"kubernetes.io/projected/243f13ba-48c3-46bc-9fa1-d345fd0ea542-kube-api-access-95kzn\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873123 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-srv-cert\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873140 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873157 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd84f54-df4c-4c6c-af61-4f92716457a4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873173 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8f77961-5ffa-496c-ac26-aac2c5461f14-signing-cabundle\") pod 
\"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-socket-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: E1123 03:57:36.873244 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.373206488 +0000 UTC m=+153.566877857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873258 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-mountpoint-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873206 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf48s\" (UniqueName: \"kubernetes.io/projected/b284192d-dce2-4f47-b7bf-b44841965150-kube-api-access-nf48s\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/692f5f61-b926-46fa-aa1d-e156d093a7ca-certs\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873695 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6326297-0945-46f8-825e-96e057069445-serving-cert\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873713 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873760 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zzssx\" (UniqueName: \"kubernetes.io/projected/e8f77961-5ffa-496c-ac26-aac2c5461f14-kube-api-access-zzssx\") pod \"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.873955 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6326297-0945-46f8-825e-96e057069445-config\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.874077 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.874136 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-plugins-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.874693 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48576b93-8342-4378-8e51-404c96d80d42-config\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.876458 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.882044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/243f13ba-48c3-46bc-9fa1-d345fd0ea542-images\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.882187 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd84f54-df4c-4c6c-af61-4f92716457a4-config\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.888388 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/77124263-db88-438c-9e26-34a934746c27-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-k7zwz\" (UID: \"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.889117 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284192d-dce2-4f47-b7bf-b44841965150-config-volume\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.889996 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-csi-data-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.890018 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8f77961-5ffa-496c-ac26-aac2c5461f14-signing-cabundle\") pod \"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.897965 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a236822-dbca-49e0-af7f-417172a744e0-tmpfs\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.900966 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.901282 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.901568 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-socket-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.908900 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.914182 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/e8f77961-5ffa-496c-ac26-aac2c5461f14-signing-key\") pod \"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.916640 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284192d-dce2-4f47-b7bf-b44841965150-secret-volume\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.916701 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-registration-dir\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.921500 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/243f13ba-48c3-46bc-9fa1-d345fd0ea542-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.925776 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd84f54-df4c-4c6c-af61-4f92716457a4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.925785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/243f13ba-48c3-46bc-9fa1-d345fd0ea542-proxy-tls\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.926298 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/692f5f61-b926-46fa-aa1d-e156d093a7ca-certs\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.926431 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a236822-dbca-49e0-af7f-417172a744e0-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.926995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/acadd328-b207-48a2-9041-00d517ed7c39-srv-cert\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:36 crc 
kubenswrapper[4751]: I1123 03:57:36.929188 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d3ce6ac-9cf3-400d-99ce-3546ee615007-cert\") pod \"ingress-canary-gbjkx\" (UID: \"3d3ce6ac-9cf3-400d-99ce-3546ee615007\") " pod="openshift-ingress-canary/ingress-canary-gbjkx" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.929958 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/acadd328-b207-48a2-9041-00d517ed7c39-profile-collector-cert\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.931759 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a236822-dbca-49e0-af7f-417172a744e0-webhook-cert\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.937604 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48576b93-8342-4378-8e51-404c96d80d42-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.940642 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d0ec04b-8c97-487a-a899-9b1a20e6a96a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zs6nk\" (UID: \"1d0ec04b-8c97-487a-a899-9b1a20e6a96a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.940741 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6326297-0945-46f8-825e-96e057069445-serving-cert\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.940966 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c7428e18-93c0-4bc3-b774-93e08c88ff63-metrics-tls\") pod \"dns-operator-744455d44c-8khnw\" (UID: \"c7428e18-93c0-4bc3-b774-93e08c88ff63\") " pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.941188 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzssx\" (UniqueName: \"kubernetes.io/projected/e8f77961-5ffa-496c-ac26-aac2c5461f14-kube-api-access-zzssx\") pod \"service-ca-9c57cc56f-vfs9h\" (UID: \"e8f77961-5ffa-496c-ac26-aac2c5461f14\") " pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.941826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/77124263-db88-438c-9e26-34a934746c27-proxy-tls\") pod \"machine-config-controller-84d6567774-k7zwz\" (UID: 
\"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.942555 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-srv-cert\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.945163 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.946395 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/692f5f61-b926-46fa-aa1d-e156d093a7ca-node-bootstrap-token\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.947701 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.950802 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.950848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7612e7-e0b7-4b66-a948-fc5bc3aa3033-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cbnj9\" (UID: \"fa7612e7-e0b7-4b66-a948-fc5bc3aa3033\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.950887 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1680deb-dbd2-45ee-9556-8a478b70ea3d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cg8zn\" (UID: \"b1680deb-dbd2-45ee-9556-8a478b70ea3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.951021 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzf2c\" (UniqueName: \"kubernetes.io/projected/692f5f61-b926-46fa-aa1d-e156d093a7ca-kube-api-access-mzf2c\") pod \"machine-config-server-wd8t9\" (UID: \"692f5f61-b926-46fa-aa1d-e156d093a7ca\") " pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.974496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:36 crc kubenswrapper[4751]: E1123 03:57:36.974861 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.474849288 +0000 UTC m=+153.668520647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.976465 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtxtd\" (UniqueName: \"kubernetes.io/projected/77124263-db88-438c-9e26-34a934746c27-kube-api-access-gtxtd\") pod \"machine-config-controller-84d6567774-k7zwz\" (UID: \"77124263-db88-438c-9e26-34a934746c27\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.988491 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt4bd\" (UniqueName: \"kubernetes.io/projected/8a236822-dbca-49e0-af7f-417172a744e0-kube-api-access-jt4bd\") pod \"packageserver-d55dfcdfc-k8pfs\" (UID: \"8a236822-dbca-49e0-af7f-417172a744e0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:36 crc kubenswrapper[4751]: I1123 03:57:36.990457 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wd8t9" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.014816 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2l6t\" (UniqueName: \"kubernetes.io/projected/1d0ec04b-8c97-487a-a899-9b1a20e6a96a-kube-api-access-d2l6t\") pod \"package-server-manager-789f6589d5-zs6nk\" (UID: \"1d0ec04b-8c97-487a-a899-9b1a20e6a96a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.019957 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zc84k"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.022438 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2fr8\" (UniqueName: \"kubernetes.io/projected/acadd328-b207-48a2-9041-00d517ed7c39-kube-api-access-x2fr8\") pod \"catalog-operator-68c6474976-d7vbk\" (UID: \"acadd328-b207-48a2-9041-00d517ed7c39\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.050614 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-npjlm"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.057125 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48576b93-8342-4378-8e51-404c96d80d42-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t2xnr\" (UID: \"48576b93-8342-4378-8e51-404c96d80d42\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.060833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md4mp\" (UniqueName: \"kubernetes.io/projected/c6326297-0945-46f8-825e-96e057069445-kube-api-access-md4mp\") pod \"service-ca-operator-777779d784-r2sbr\" (UID: \"c6326297-0945-46f8-825e-96e057069445\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:37 crc kubenswrapper[4751]: W1123 03:57:37.074601 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c11da5a_8eab_4e0a_a06c_0c38d7cd8596.slice/crio-7d7ff13d241c2c958a30c1dc1eb2f364658e152712512074c4ca9ac0319da15d WatchSource:0}: Error finding container 7d7ff13d241c2c958a30c1dc1eb2f364658e152712512074c4ca9ac0319da15d: Status 404 returned error can't find the container with id 7d7ff13d241c2c958a30c1dc1eb2f364658e152712512074c4ca9ac0319da15d Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.075012 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.075578 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.575562442 +0000 UTC m=+153.769233801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.087774 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7n8\" (UniqueName: \"kubernetes.io/projected/b0b3ac60-91ee-4571-8256-69d2ff1f53bd-kube-api-access-8r7n8\") pod \"olm-operator-6b444d44fb-7s6fp\" (UID: \"b0b3ac60-91ee-4571-8256-69d2ff1f53bd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.109243 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96bxk\" (UniqueName: \"kubernetes.io/projected/3d3ce6ac-9cf3-400d-99ce-3546ee615007-kube-api-access-96bxk\") pod \"ingress-canary-gbjkx\" (UID: \"3d3ce6ac-9cf3-400d-99ce-3546ee615007\") " pod="openshift-ingress-canary/ingress-canary-gbjkx" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.128868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95kzn\" (UniqueName: \"kubernetes.io/projected/243f13ba-48c3-46bc-9fa1-d345fd0ea542-kube-api-access-95kzn\") pod \"machine-config-operator-74547568cd-sz452\" (UID: \"243f13ba-48c3-46bc-9fa1-d345fd0ea542\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.143014 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.148264 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrds7\" (UniqueName: \"kubernetes.io/projected/2ff8f82b-9489-4ffb-8e5c-40b20a0c6877-kube-api-access-nrds7\") pod \"csi-hostpathplugin-4qdtn\" (UID: \"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877\") " pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.166378 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f029c90-06f4-4a4e-8f4a-8620a6455c5a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4mkpq\" (UID: \"4f029c90-06f4-4a4e-8f4a-8620a6455c5a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.172675 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.176550 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.177059 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.677044988 +0000 UTC m=+153.870716347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.199145 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.201544 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf48s\" (UniqueName: \"kubernetes.io/projected/b284192d-dce2-4f47-b7bf-b44841965150-kube-api-access-nf48s\") pod \"collect-profiles-29397825-txtpz\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.209951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8mzf\" (UniqueName: \"kubernetes.io/projected/b1680deb-dbd2-45ee-9556-8a478b70ea3d-kube-api-access-s8mzf\") pod \"multus-admission-controller-857f4d67dd-cg8zn\" (UID: \"b1680deb-dbd2-45ee-9556-8a478b70ea3d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.211968 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.218151 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.222728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dd84f54-df4c-4c6c-af61-4f92716457a4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6q5kx\" (UID: \"1dd84f54-df4c-4c6c-af61-4f92716457a4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.226265 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.231782 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.237904 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.239097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjx55\" (UniqueName: \"kubernetes.io/projected/c7428e18-93c0-4bc3-b774-93e08c88ff63-kube-api-access-zjx55\") pod \"dns-operator-744455d44c-8khnw\" (UID: \"c7428e18-93c0-4bc3-b774-93e08c88ff63\") " pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.244085 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.264156 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gbjkx" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.271194 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xkk6\" (UniqueName: \"kubernetes.io/projected/fa7612e7-e0b7-4b66-a948-fc5bc3aa3033-kube-api-access-2xkk6\") pod \"control-plane-machine-set-operator-78cbb6b69f-cbnj9\" (UID: \"fa7612e7-e0b7-4b66-a948-fc5bc3aa3033\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.278542 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.278941 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.778925354 +0000 UTC m=+153.972596713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.282540 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.283764 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.288965 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtq8s\" (UniqueName: \"kubernetes.io/projected/37c87365-7c6f-4f74-957d-3511c274b1c0-kube-api-access-vtq8s\") pod \"marketplace-operator-79b997595-rfmp4\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") " pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.297974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmm2\" (UniqueName: \"kubernetes.io/projected/fdaf1dc7-872b-4fe6-adeb-83b381d1754e-kube-api-access-pdmm2\") pod \"kube-storage-version-migrator-operator-b67b599dd-pvgv8\" (UID: \"fdaf1dc7-872b-4fe6-adeb-83b381d1754e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.380858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.388953 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.888936091 +0000 UTC m=+154.082607450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.452380 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.456977 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.463062 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.476956 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.477668 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.482036 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.482231 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:37.982203683 +0000 UTC m=+154.175875042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.489494 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.493402 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.506334 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.565999 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bbrhw"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.574165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vfs9h"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.574224 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" event={"ID":"1e98face-cbe3-455b-af70-2cd70f20f290","Type":"ContainerStarted","Data":"cffdc4b92567f43cddd1322c0e0e4e53f9671bc44503e9efbd88bfd7b21636e2"} Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.585969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.586307 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.08629533 +0000 UTC m=+154.279966689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.596985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5lk47" event={"ID":"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752","Type":"ContainerStarted","Data":"03229de846dbf77eb612d2ec4b3ec0b3270a6b66f354ec3817c019d197709a00"} Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.597031 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5lk47" event={"ID":"5a7bbd2e-2fd6-42d0-948d-3fe6d136e752","Type":"ContainerStarted","Data":"360ec8aa2dccaccb45e1cfc5c048994bb78496b21157306a33fb091e0ad12478"} Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.654528 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.654850 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.654862 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rtcqm"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.672060 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-n7llf" event={"ID":"d23032bc-8a84-4925-b4de-f2622d042320","Type":"ContainerStarted","Data":"de37fac98b83b84066e6682c2806fd77964cb0d57fc6e1c8e372fb7f4bdee77a"} Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.672104 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-n7llf" event={"ID":"d23032bc-8a84-4925-b4de-f2622d042320","Type":"ContainerStarted","Data":"29b0cd844b4b7508f53c11d2a4468b91ed33a8010eebb89e30599f52e90f5816"} Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.672973 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-n7llf" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.688392 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.689471 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.18945147 +0000 UTC m=+154.383122829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.710375 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-n7llf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.710419 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-n7llf" podUID="d23032bc-8a84-4925-b4de-f2622d042320" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 03:57:37 crc kubenswrapper[4751]: W1123 03:57:37.720659 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4baedc4d_15a1_49d0_b82f_a57fce419702.slice/crio-d726835aeb97691c5993b1153d2436ee4ce21073426df601aac1a9e4895d6c7f WatchSource:0}: Error finding container d726835aeb97691c5993b1153d2436ee4ce21073426df601aac1a9e4895d6c7f: Status 404 returned error can't find the container with id d726835aeb97691c5993b1153d2436ee4ce21073426df601aac1a9e4895d6c7f Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.722390 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr"] Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.727177 4751 generic.go:334] "Generic (PLEG): container finished" podID="455415b4-ad3a-4984-ad2e-50cc10802f94" containerID="8b23652f26ee1e6a1b04cac61ed34b38c148a41e7774ce1547519a3008d15e95" exitCode=0 Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.727259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" event={"ID":"455415b4-ad3a-4984-ad2e-50cc10802f94","Type":"ContainerDied","Data":"8b23652f26ee1e6a1b04cac61ed34b38c148a41e7774ce1547519a3008d15e95"} Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.727278 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" event={"ID":"455415b4-ad3a-4984-ad2e-50cc10802f94","Type":"ContainerStarted","Data":"5792d26179014d52422690a66319e3e3e2a56bb351f91de59ea6de39f95dbb15"} Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.792009 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns" event={"ID":"602b1e45-48d9-4d06-981c-b9c45cb18c1b","Type":"ContainerStarted","Data":"234b2ee2cbff237441d8957eb517f54b4fbd0c058ad412dfc5411b0bc369dae1"} Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.792228 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns" event={"ID":"602b1e45-48d9-4d06-981c-b9c45cb18c1b","Type":"ContainerStarted","Data":"3b82bc2922a95b2dd1b75b81860a1acc84a1c6fb14d60ebbabf20c7a68f6057b"} Nov 23 03:57:37 crc kubenswrapper[4751]: 
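The probe failures and the cAdvisor manager.go:1169 warning above are ordinary startup churn: a container is reported started before its server is listening, so the first readiness attempts get "connection refused", and the cgroup watcher can race a container that the runtime has already replaced (hence the 404). Functionally, an HTTP probe boils down to a GET with a short timeout where only a 2xx/3xx status passes; a sketch, with the URL copied from the download-server probe above and the timeout illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs a kubelet-style HTTP check: any transport error or a
// status outside 200-399 counts as a probe failure.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while the server is still starting
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	if err := probe("http://10.217.0.12:8080/", time.Second); err != nil {
		fmt.Println("probe failed:", err)
	}
}
```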
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.792559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49"
Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.793681 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.29366625 +0000 UTC m=+154.487337609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.809223 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz"]
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.809754 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5lk47"
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.816741 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" event={"ID":"ae952398-26e2-4b90-8df3-5cb6ff9529e9","Type":"ContainerStarted","Data":"2930ca9814a25ae32a3bf9adc249e48828c3431ad2165df43b75e21badabb8ce"}
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.816772 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" event={"ID":"ae952398-26e2-4b90-8df3-5cb6ff9529e9","Type":"ContainerStarted","Data":"4695b33d19e73c5c5b9a4944302ea1b9bbb37913a9f3ef21ef1073921e85305f"}
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.836920 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" event={"ID":"af9edb45-f2d2-41c5-b003-a5f2d2777cc8","Type":"ContainerStarted","Data":"91ae1f11bed4bac64401951e919b9d5ab1c3d65d2c03895d4060cdcce98b1609"}
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.856541 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wd8t9" event={"ID":"692f5f61-b926-46fa-aa1d-e156d093a7ca","Type":"ContainerStarted","Data":"ed65939c7bcd2bf03a52870b97cdaf77f96f848837855b62263ada15ab612e92"}
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.856595 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wd8t9" event={"ID":"692f5f61-b926-46fa-aa1d-e156d093a7ca","Type":"ContainerStarted","Data":"323303dbe1fc5478b77fe7b99232eae2cf70b562970a21167eb5eb9886f09d06"}
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.871381 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" event={"ID":"91ece5d6-83e6-4293-bbe0-351c6c8e9516","Type":"ContainerStarted","Data":"185b42c1cedd56cdc3f6b05016d192bc0de987ebe5e765b0d3ea065a850e9764"}
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.894916 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.896478 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.396460831 +0000 UTC m=+154.590132190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.902543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-npjlm" event={"ID":"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596","Type":"ContainerStarted","Data":"7d7ff13d241c2c958a30c1dc1eb2f364658e152712512074c4ca9ac0319da15d"}
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.903905 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.910788 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-762nm" podStartSLOduration=130.91077627 podStartE2EDuration="2m10.91077627s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:37.871787221 +0000 UTC m=+154.065458580" watchObservedRunningTime="2025-11-23 03:57:37.91077627 +0000 UTC m=+154.104447629"
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.918006 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.918063 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.933054 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q"
Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.948327 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"
status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 03:57:37 crc kubenswrapper[4751]: I1123 03:57:37.996857 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:37 crc kubenswrapper[4751]: E1123 03:57:37.997108 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.497097844 +0000 UTC m=+154.690769203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.106127 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.106718 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.60670199 +0000 UTC m=+154.800373339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.114973 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.115019 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.117044 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-g5d4w" podStartSLOduration=131.11702225 podStartE2EDuration="2m11.11702225s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:38.055914781 +0000 UTC m=+154.249586140" watchObservedRunningTime="2025-11-23 03:57:38.11702225 +0000 UTC m=+154.310693609" Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.140443 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" podStartSLOduration=130.140425195 podStartE2EDuration="2m10.140425195s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:38.138872813 +0000 UTC m=+154.332544172" watchObservedRunningTime="2025-11-23 03:57:38.140425195 +0000 UTC m=+154.334096554" Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.208015 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.208416 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.708399311 +0000 UTC m=+154.902070670 (durationBeforeRetry 500ms). 
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.312203 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.312857 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.812840777 +0000 UTC m=+155.006512126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.417081 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49"
Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.417399 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:38.917388805 +0000 UTC m=+155.111060154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.467980 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5lk47" podStartSLOduration=130.467813665 podStartE2EDuration="2m10.467813665s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:38.466607042 +0000 UTC m=+154.660278401" watchObservedRunningTime="2025-11-23 03:57:38.467813665 +0000 UTC m=+154.661485024"
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.518884 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.519245 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.019229661 +0000 UTC m=+155.212901020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.534608 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cdgq6" podStartSLOduration=131.534593218 podStartE2EDuration="2m11.534593218s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:38.534452064 +0000 UTC m=+154.728123423" watchObservedRunningTime="2025-11-23 03:57:38.534593218 +0000 UTC m=+154.728264577"
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.622224 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49"
Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.622516 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.122504725 +0000 UTC m=+155.316176084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.662516 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" podStartSLOduration=130.662499391 podStartE2EDuration="2m10.662499391s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:38.644338287 +0000 UTC m=+154.838009646" watchObservedRunningTime="2025-11-23 03:57:38.662499391 +0000 UTC m=+154.856170750"
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.724829 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.724906 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.224892305 +0000 UTC m=+155.418563664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.725028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49"
Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.725308 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.225301636 +0000 UTC m=+155.418972995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.827392 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.828195 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.328179789 +0000 UTC m=+155.521851148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.895935 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv"
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.900298 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-n7llf" podStartSLOduration=130.900277267 podStartE2EDuration="2m10.900277267s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:38.898472268 +0000 UTC m=+155.092143627" watchObservedRunningTime="2025-11-23 03:57:38.900277267 +0000 UTC m=+155.093948626"
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.903448 4751 patch_prober.go:28] interesting pod/console-operator-58897d9998-x8bwj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.903500 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" podUID="15b90fa4-76d8-48bc-ad54-bf2b92ab2349" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.933339 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49"
Nov 23 03:57:38 crc kubenswrapper[4751]: E1123 03:57:38.933604 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.433594661 +0000 UTC m=+155.627266020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.941218 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 03:57:38 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld
Nov 23 03:57:38 crc kubenswrapper[4751]: [+]process-running ok
Nov 23 03:57:38 crc kubenswrapper[4751]: healthz check failed
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.941591 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.949821 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" event={"ID":"48576b93-8342-4378-8e51-404c96d80d42","Type":"ContainerStarted","Data":"2a9546ada10881d9df55e730d6b9ed560d5a3dc5c314e53d66d7ad8f761c1aae"}
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.949867 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" event={"ID":"48576b93-8342-4378-8e51-404c96d80d42","Type":"ContainerStarted","Data":"967181cb44418a7cc1a5038e9a49736dd22692fc21be11b1bcaf9ae736d2c85e"}
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.964012 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk"]
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.974774 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" event={"ID":"e6d3230f-71b8-4238-94a0-51641c56dae7","Type":"ContainerStarted","Data":"a535fe236aa97e17946c7321b97aae2ddd1cb67009761f3dc9ec0281b8b61094"}
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.974816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" event={"ID":"e6d3230f-71b8-4238-94a0-51641c56dae7","Type":"ContainerStarted","Data":"d89d256263309e10aa4499cb31beae4f128fd778be13636838b18599d0b1cf21"}
Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.974826 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" event={"ID":"e6d3230f-71b8-4238-94a0-51641c56dae7","Type":"ContainerStarted","Data":"e03a112aad65128b38c1b489d1a209b1d089eaad1d57517d526ed99aec6d0afb"}
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" event={"ID":"e6d3230f-71b8-4238-94a0-51641c56dae7","Type":"ContainerStarted","Data":"e03a112aad65128b38c1b489d1a209b1d089eaad1d57517d526ed99aec6d0afb"} Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.979287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" event={"ID":"e8f77961-5ffa-496c-ac26-aac2c5461f14","Type":"ContainerStarted","Data":"615c86dda0a36af0f34c4298e50109665453f0b3139e20886be4fbd96746ddec"} Nov 23 03:57:38 crc kubenswrapper[4751]: I1123 03:57:38.979326 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" event={"ID":"e8f77961-5ffa-496c-ac26-aac2c5461f14","Type":"ContainerStarted","Data":"7493cc93e08375c01b086059a8d9ad544fa4dc1bb7b5a8ee1a15c6ea5f7c2560"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.024013 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.035743 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.036374 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.536354541 +0000 UTC m=+155.730025900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.037911 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gbjkx"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.070728 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bbrhw" event={"ID":"4baedc4d-15a1-49d0-b82f-a57fce419702","Type":"ContainerStarted","Data":"2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.079228 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bbrhw" event={"ID":"4baedc4d-15a1-49d0-b82f-a57fce419702","Type":"ContainerStarted","Data":"d726835aeb97691c5993b1153d2436ee4ce21073426df601aac1a9e4895d6c7f"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.103239 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.103573 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sz452"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.123087 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" event={"ID":"1e98face-cbe3-455b-af70-2cd70f20f290","Type":"ContainerStarted","Data":"6ab6bd466d95fe0dc8ab6324dc27e108157c7e1d9e7935e050e455a0d3b34654"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.131181 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" event={"ID":"77124263-db88-438c-9e26-34a934746c27","Type":"ContainerStarted","Data":"3599ea55ef34711fbeb8cfc5695792ab66eb6490bad860f78d3d17847d95a5cf"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.131218 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" event={"ID":"77124263-db88-438c-9e26-34a934746c27","Type":"ContainerStarted","Data":"63a310f2d089dc181b7e5e1c7b0603e6aa324ad0e8a29d844284755d1a4c8f16"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.135686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" event={"ID":"a3841659-4757-4975-9d37-7d70ebea2dcb","Type":"ContainerStarted","Data":"8805d10de4a62b5bad3c2e579a22b884a4088efeb244f44a331b00bbb5c46f9b"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.135716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" event={"ID":"a3841659-4757-4975-9d37-7d70ebea2dcb","Type":"ContainerStarted","Data":"d8993b45e47dbb9922981ae06645ae64ded1bc0085a591f04ca56a319ebbbc26"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.139067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.140258 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.640246192 +0000 UTC m=+155.833917551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.140604 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" event={"ID":"af9edb45-f2d2-41c5-b003-a5f2d2777cc8","Type":"ContainerStarted","Data":"a5e1d6122f247c948c669ecb1279242f45e1eb808ceb09f4033aa07c816dce16"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.155865 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" podStartSLOduration=131.155848716 podStartE2EDuration="2m11.155848716s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.120978629 +0000 UTC m=+155.314649988" watchObservedRunningTime="2025-11-23 03:57:39.155848716 +0000 UTC m=+155.349520075" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.174774 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" event={"ID":"5f21aa19-7efd-424d-9ce7-b735d8356d64","Type":"ContainerStarted","Data":"c733b827caa63b20574ea88d240a9f5b642999eb0d7ebea530a0b8ba6e981dd9"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.174815 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" event={"ID":"5f21aa19-7efd-424d-9ce7-b735d8356d64","Type":"ContainerStarted","Data":"652b7eed48cc2b17762d3569f574d483fc798f3c5fdfa6b1c401c22ba31b4886"} Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.244789 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.245581 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" event={"ID":"455415b4-ad3a-4984-ad2e-50cc10802f94","Type":"ContainerStarted","Data":"9e1985e4cdf02bed56e1fc302f3a11f7fc5a3e26bae6a79a0a5c6883945f9c6f"} Nov 23 
Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.245847 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.745832519 +0000 UTC m=+155.939503878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.246093 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn"
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.266481 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk"]
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.272257 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-8kg8p" podStartSLOduration=131.272240006 podStartE2EDuration="2m11.272240006s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.271607849 +0000 UTC m=+155.465279208" watchObservedRunningTime="2025-11-23 03:57:39.272240006 +0000 UTC m=+155.465911365"
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.305303 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns" event={"ID":"602b1e45-48d9-4d06-981c-b9c45cb18c1b","Type":"ContainerStarted","Data":"a45f775e4012b45cf044419ac66cf44e0c95c73cddf0144b84e75e26109605ab"}
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.330020 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq"]
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.345834 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49"
Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.347553 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.847539441 +0000 UTC m=+156.041210800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.351688 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-npjlm" event={"ID":"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596","Type":"ContainerStarted","Data":"d5f18ae47f5f95651016a6d5aa783696e853df3412d7f9fbfbaab4f11649f890"}
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.352277 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-npjlm"
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.355190 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs"]
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.397113 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" event={"ID":"983f8d3e-cb51-4b5d-b11b-d28c27a334f0","Type":"ContainerStarted","Data":"40118f3bbd87702be6bfbdc385659760f5f3675e4258a6294401b1e98e045e57"}
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.397143 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" event={"ID":"983f8d3e-cb51-4b5d-b11b-d28c27a334f0","Type":"ContainerStarted","Data":"069f13eda87fd9379c72de38c5cf021c5018f5b5ea9ae821eb2a44d579c22320"}
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.398523 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-n7llf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.398567 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-n7llf" podUID="d23032bc-8a84-4925-b4de-f2622d042320" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.400989 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz"]
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.422952 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4qdtn"]
Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.435004 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" podStartSLOduration=131.434991045 podStartE2EDuration="2m11.434991045s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.433655039 +0000 UTC m=+155.627326398" watchObservedRunningTime="2025-11-23 03:57:39.434991045 +0000 UTC m=+155.628662404"
Nov 23 03:57:39 crc kubenswrapper[4751]: W1123 03:57:39.447448 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a236822_dbca_49e0_af7f_417172a744e0.slice/crio-a591ab8c82ff6bc4fc247c5d5c6ad22b6951a1e07c0ed99e4f81bca40efe4d6e WatchSource:0}: Error finding container a591ab8c82ff6bc4fc247c5d5c6ad22b6951a1e07c0ed99e4f81bca40efe4d6e: Status 404 returned error can't find the container with id a591ab8c82ff6bc4fc247c5d5c6ad22b6951a1e07c0ed99e4f81bca40efe4d6e
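The "SyncLoop UPDATE" for hostpath-provisioner/csi-hostpathplugin-4qdtn just above is the event that will eventually break the mount/unmount loop: once that plugin pod is running, the driver re-registers over the kubelet plugin-registration socket and kubevirt.io.hostpath-provisioner reappears in the registered-driver list. A quick node-side check of what is registered right now, assuming the default kubelet root directory (drivers drop registration sockets under plugins_registry; the exact socket name is chosen by the driver):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/var/lib/kubelet/plugins_registry" // assumes the default kubelet --root-dir
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		// Expect something like kubevirt.io.hostpath-provisioner-reg.sock once the plugin is up.
		fmt.Println(filepath.Join(dir, e.Name()))
	}
}
```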
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a236822_dbca_49e0_af7f_417172a744e0.slice/crio-a591ab8c82ff6bc4fc247c5d5c6ad22b6951a1e07c0ed99e4f81bca40efe4d6e WatchSource:0}: Error finding container a591ab8c82ff6bc4fc247c5d5c6ad22b6951a1e07c0ed99e4f81bca40efe4d6e: Status 404 returned error can't find the container with id a591ab8c82ff6bc4fc247c5d5c6ad22b6951a1e07c0ed99e4f81bca40efe4d6e Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.447600 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-x8bwj" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.448162 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.448749 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.948729608 +0000 UTC m=+156.142400967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.449096 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.456923 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:39.95690829 +0000 UTC m=+156.150579649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.461563 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.461610 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8khnw"] Nov 23 03:57:39 crc kubenswrapper[4751]: W1123 03:57:39.465448 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff8f82b_9489_4ffb_8e5c_40b20a0c6877.slice/crio-cf7a3485cced3b3b5e45c16a78f8e2175e820e7144427788abd3a9e8c9e361ea WatchSource:0}: Error finding container cf7a3485cced3b3b5e45c16a78f8e2175e820e7144427788abd3a9e8c9e361ea: Status 404 returned error can't find the container with id cf7a3485cced3b3b5e45c16a78f8e2175e820e7144427788abd3a9e8c9e361ea Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.477740 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" podStartSLOduration=132.477727106 podStartE2EDuration="2m12.477727106s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.473421759 +0000 UTC m=+155.667093118" watchObservedRunningTime="2025-11-23 03:57:39.477727106 +0000 UTC m=+155.671398465" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.509592 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2xnr" podStartSLOduration=131.50957331 podStartE2EDuration="2m11.50957331s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.497362619 +0000 UTC m=+155.691033978" watchObservedRunningTime="2025-11-23 03:57:39.50957331 +0000 UTC m=+155.703244669" Nov 23 03:57:39 crc kubenswrapper[4751]: W1123 03:57:39.535512 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdaf1dc7_872b_4fe6_adeb_83b381d1754e.slice/crio-a731608909c595e2c3063be6786c383b15b8e36a66e58123b9e002b2c97239e0 WatchSource:0}: Error finding container a731608909c595e2c3063be6786c383b15b8e36a66e58123b9e002b2c97239e0: Status 404 returned error can't find the container with id a731608909c595e2c3063be6786c383b15b8e36a66e58123b9e002b2c97239e0 Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.549029 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" podStartSLOduration=131.549009341 podStartE2EDuration="2m11.549009341s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.535870224 +0000 UTC m=+155.729541583" watchObservedRunningTime="2025-11-23 03:57:39.549009341 +0000 UTC m=+155.742680700" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.556368 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.557017 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.056998248 +0000 UTC m=+156.250669607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.560566 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.574172 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vfs9h" podStartSLOduration=131.574150494 podStartE2EDuration="2m11.574150494s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.573307721 +0000 UTC m=+155.766979080" watchObservedRunningTime="2025-11-23 03:57:39.574150494 +0000 UTC m=+155.767821853" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.586372 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rfmp4"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.603693 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.605037 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" podStartSLOduration=131.605018512 podStartE2EDuration="2m11.605018512s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.598597157 +0000 UTC m=+155.792268516" watchObservedRunningTime="2025-11-23 03:57:39.605018512 +0000 UTC m=+155.798689871" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.660072 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.661539 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.161506465 +0000 UTC m=+156.355177824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.673230 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l4xns" podStartSLOduration=131.673215193 podStartE2EDuration="2m11.673215193s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.656672714 +0000 UTC m=+155.850344073" watchObservedRunningTime="2025-11-23 03:57:39.673215193 +0000 UTC m=+155.866886552" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.673684 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cg8zn"] Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.762149 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.762419 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.262387285 +0000 UTC m=+156.456058654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.762735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.763510 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.263474844 +0000 UTC m=+156.457146203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.793847 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vrbtv" podStartSLOduration=131.793832048 podStartE2EDuration="2m11.793832048s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.792743909 +0000 UTC m=+155.986415268" watchObservedRunningTime="2025-11-23 03:57:39.793832048 +0000 UTC m=+155.987503407" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.793939 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zc84k" podStartSLOduration=131.793935521 podStartE2EDuration="2m11.793935521s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.755327043 +0000 UTC m=+155.948998402" watchObservedRunningTime="2025-11-23 03:57:39.793935521 +0000 UTC m=+155.987606870" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.823367 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:39 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:39 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:39 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.823440 4751 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.861065 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" podStartSLOduration=131.861049333 podStartE2EDuration="2m11.861049333s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.860988982 +0000 UTC m=+156.054660331" watchObservedRunningTime="2025-11-23 03:57:39.861049333 +0000 UTC m=+156.054720692" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.863318 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.863598 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.363584652 +0000 UTC m=+156.557256001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.936069 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wd8t9" podStartSLOduration=5.936051789 podStartE2EDuration="5.936051789s" podCreationTimestamp="2025-11-23 03:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.919725566 +0000 UTC m=+156.113396925" watchObservedRunningTime="2025-11-23 03:57:39.936051789 +0000 UTC m=+156.129723148" Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.966162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:39 crc kubenswrapper[4751]: E1123 03:57:39.966515 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.466502906 +0000 UTC m=+156.660174265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:39 crc kubenswrapper[4751]: I1123 03:57:39.993332 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bbrhw" podStartSLOduration=131.993309744 podStartE2EDuration="2m11.993309744s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:39.990210209 +0000 UTC m=+156.183881568" watchObservedRunningTime="2025-11-23 03:57:39.993309744 +0000 UTC m=+156.186981103" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.070058 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.070248 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.570225182 +0000 UTC m=+156.763896541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.070310 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.070615 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.570604332 +0000 UTC m=+156.764275691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.084486 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q7w24" podStartSLOduration=132.084473359 podStartE2EDuration="2m12.084473359s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.08340135 +0000 UTC m=+156.277072709" watchObservedRunningTime="2025-11-23 03:57:40.084473359 +0000 UTC m=+156.278144718" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.085556 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pqqs9" podStartSLOduration=132.085552608 podStartE2EDuration="2m12.085552608s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.0285489 +0000 UTC m=+156.222220259" watchObservedRunningTime="2025-11-23 03:57:40.085552608 +0000 UTC m=+156.279223967" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.171283 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.171648 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.671634085 +0000 UTC m=+156.865305444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.192487 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" podStartSLOduration=133.192470821 podStartE2EDuration="2m13.192470821s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.191065993 +0000 UTC m=+156.384737352" watchObservedRunningTime="2025-11-23 03:57:40.192470821 +0000 UTC m=+156.386142180" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.275924 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.276461 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.776448731 +0000 UTC m=+156.970120090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.376828 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.377014 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.876992311 +0000 UTC m=+157.070663670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.377062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.377422 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.877413153 +0000 UTC m=+157.071084512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.465111 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" event={"ID":"c6326297-0945-46f8-825e-96e057069445","Type":"ContainerStarted","Data":"05bece77b5c9b5076b7d74df457ebe8bdf427419b8d1a41444eb31e48eff196e"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.465160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" event={"ID":"c6326297-0945-46f8-825e-96e057069445","Type":"ContainerStarted","Data":"d67a05fa7c8bfced9ccfcc2f720818b4d96fba605865cb0cd7794174f7c6ff4c"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.470571 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k7zwz" event={"ID":"77124263-db88-438c-9e26-34a934746c27","Type":"ContainerStarted","Data":"447e3823d3c4fc15f66cf80e934137e9bebf77fe4dc9085e98921290f8787a98"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.478338 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.478470 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.978443386 +0000 UTC m=+157.172114745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.478599 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.478930 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:40.978922809 +0000 UTC m=+157.172594168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.502260 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-npjlm" podStartSLOduration=6.502240892 podStartE2EDuration="6.502240892s" podCreationTimestamp="2025-11-23 03:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.227152653 +0000 UTC m=+156.420824012" watchObservedRunningTime="2025-11-23 03:57:40.502240892 +0000 UTC m=+156.695912251" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.503614 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r2sbr" podStartSLOduration=132.503606309 podStartE2EDuration="2m12.503606309s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.489296431 +0000 UTC m=+156.682967790" watchObservedRunningTime="2025-11-23 03:57:40.503606309 +0000 UTC m=+156.697277668" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.544015 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" event={"ID":"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877","Type":"ContainerStarted","Data":"cf7a3485cced3b3b5e45c16a78f8e2175e820e7144427788abd3a9e8c9e361ea"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.556931 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" event={"ID":"243f13ba-48c3-46bc-9fa1-d345fd0ea542","Type":"ContainerStarted","Data":"19db5a8017cf4c2f1c7850b1caee2eda79a14f9d187e01d2c8437487ff7fe28d"} 
Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.556974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" event={"ID":"243f13ba-48c3-46bc-9fa1-d345fd0ea542","Type":"ContainerStarted","Data":"627a11c9a8aa8790e9ce5e9ca7098abff8a59823146de43150568f5940d96802"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.579610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.580722 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.080706062 +0000 UTC m=+157.274377421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.588705 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" podStartSLOduration=132.588690639 podStartE2EDuration="2m12.588690639s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.581463773 +0000 UTC m=+156.775135132" watchObservedRunningTime="2025-11-23 03:57:40.588690639 +0000 UTC m=+156.782361998" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.594835 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" event={"ID":"fdaf1dc7-872b-4fe6-adeb-83b381d1754e","Type":"ContainerStarted","Data":"a731608909c595e2c3063be6786c383b15b8e36a66e58123b9e002b2c97239e0"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.612626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" event={"ID":"b284192d-dce2-4f47-b7bf-b44841965150","Type":"ContainerStarted","Data":"1b0edbb0fd435f5a3758c6913286143cbb8c31676a3ed6522ef2fc6a8c7bb955"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.612946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" event={"ID":"b284192d-dce2-4f47-b7bf-b44841965150","Type":"ContainerStarted","Data":"f5e9c6665aa3fca870cee2377e2afd9447054996e591de4a5f00565dc0c05f2e"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.614841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" 
event={"ID":"b0b3ac60-91ee-4571-8256-69d2ff1f53bd","Type":"ContainerStarted","Data":"5026a048dc8cc2668461a7ab19e8a294cbd214f0ee51297b9b77c26c9aacb2f5"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.614884 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" event={"ID":"b0b3ac60-91ee-4571-8256-69d2ff1f53bd","Type":"ContainerStarted","Data":"d6d4466d50069b025c3e58cd9f7b4d3514bf4fa34db152e00c1f5e45e7242c98"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.615143 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.622697 4751 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7s6fp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.622751 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" podUID="b0b3ac60-91ee-4571-8256-69d2ff1f53bd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.632778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" event={"ID":"1d0ec04b-8c97-487a-a899-9b1a20e6a96a","Type":"ContainerStarted","Data":"4af5fee8a55f93382982467c477bd7b78d537e8eee026c13c1878d2fd40ec740"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.632822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" event={"ID":"1d0ec04b-8c97-487a-a899-9b1a20e6a96a","Type":"ContainerStarted","Data":"48b9c1705cd89245a094de1aeec69825cb083eabed0888cb650d1359eb2f77f3"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.632833 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" event={"ID":"1d0ec04b-8c97-487a-a899-9b1a20e6a96a","Type":"ContainerStarted","Data":"92d91c5092250a35706110cbed4b905e47c6add87c4fab409aa32f45480ca742"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.633446 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.678660 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" event={"ID":"4f029c90-06f4-4a4e-8f4a-8620a6455c5a","Type":"ContainerStarted","Data":"2a045d23f31f6e895c3720e839e480be6ad2acd824b1e9a6a2b57a3943e3f289"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.678703 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" event={"ID":"4f029c90-06f4-4a4e-8f4a-8620a6455c5a","Type":"ContainerStarted","Data":"d4aaf4b9e7fa85a74d5435dfbc3b8b43143e70df935e22dc3884d33eca8bacee"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.680824 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.682037 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.182025523 +0000 UTC m=+157.375696872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.694749 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" podStartSLOduration=132.694735419 podStartE2EDuration="2m12.694735419s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.668773864 +0000 UTC m=+156.862445223" watchObservedRunningTime="2025-11-23 03:57:40.694735419 +0000 UTC m=+156.888406778" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.699521 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" event={"ID":"c7428e18-93c0-4bc3-b774-93e08c88ff63","Type":"ContainerStarted","Data":"37ccd6b204b5e2fccab79ccf1e3b4cbb7d07f54d848336e0f3dd915d3e5ffb5d"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.708100 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" event={"ID":"fa7612e7-e0b7-4b66-a948-fc5bc3aa3033","Type":"ContainerStarted","Data":"7d5a25ef386c170b909584a5206e8798fc7331277b16f00b41072d2939f5339c"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.708184 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" event={"ID":"fa7612e7-e0b7-4b66-a948-fc5bc3aa3033","Type":"ContainerStarted","Data":"dfccb27c0c5afea05bc3eaf7fdead5fb8f2db985045bcc9341fd82c850310579"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.738500 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" event={"ID":"acadd328-b207-48a2-9041-00d517ed7c39","Type":"ContainerStarted","Data":"3fb5ac664db0882599597ac21b0eb6ea392b56fd5b05656ee66c91b961f8f5af"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.738548 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" event={"ID":"acadd328-b207-48a2-9041-00d517ed7c39","Type":"ContainerStarted","Data":"d443744e5ff5d4931fbe769aba80ccbfba77f139fd79fd5217690a2a92362c9e"} Nov 23 03:57:40 
crc kubenswrapper[4751]: I1123 03:57:40.739441 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.750209 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" event={"ID":"1dd84f54-df4c-4c6c-af61-4f92716457a4","Type":"ContainerStarted","Data":"929ae00a2e56d38b6c3721e2fb14df31a68a5da648ccd442e71eceee164656c3"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.759768 4751 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d7vbk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.759819 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" podUID="acadd328-b207-48a2-9041-00d517ed7c39" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.759983 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gbjkx" event={"ID":"3d3ce6ac-9cf3-400d-99ce-3546ee615007","Type":"ContainerStarted","Data":"6d0b642e50f65edee730c5afa514b26727557b31a6ea39fdf8527d818374ebfa"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.760025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gbjkx" event={"ID":"3d3ce6ac-9cf3-400d-99ce-3546ee615007","Type":"ContainerStarted","Data":"28628d7f0aded62021361a289ad9036a23b85e440986d632fe08f0fe3557400f"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.770722 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.770757 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.771864 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" podStartSLOduration=132.771855173 podStartE2EDuration="2m12.771855173s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.694734479 +0000 UTC m=+156.888405838" watchObservedRunningTime="2025-11-23 03:57:40.771855173 +0000 UTC m=+156.965526532" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.788389 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.789137 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.289119751 +0000 UTC m=+157.482791110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.796583 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" event={"ID":"37c87365-7c6f-4f74-957d-3511c274b1c0","Type":"ContainerStarted","Data":"85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.796623 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" event={"ID":"37c87365-7c6f-4f74-957d-3511c274b1c0","Type":"ContainerStarted","Data":"3aa707a92dc10e4c7b6502a49815810dc975af268296f9020124588e818e7800"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.797170 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.809394 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.819906 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rfmp4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.819972 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" podUID="37c87365-7c6f-4f74-957d-3511c274b1c0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.821252 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" podStartSLOduration=133.821242664 podStartE2EDuration="2m13.821242664s" podCreationTimestamp="2025-11-23 03:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.820586866 +0000 UTC m=+157.014258225" watchObservedRunningTime="2025-11-23 03:57:40.821242664 +0000 UTC m=+157.014914023" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.821552 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tpvvr" event={"ID":"a3841659-4757-4975-9d37-7d70ebea2dcb","Type":"ContainerStarted","Data":"03e59b3f7f3f7a2cac109923597ac037122db71555ba2743f09d257e474af9a3"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 
03:57:40.822375 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" podStartSLOduration=132.822368004 podStartE2EDuration="2m12.822368004s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.773644661 +0000 UTC m=+156.967316020" watchObservedRunningTime="2025-11-23 03:57:40.822368004 +0000 UTC m=+157.016039363" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.836375 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:40 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:40 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:40 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.836416 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.850752 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-npjlm" event={"ID":"4c11da5a-8eab-4e0a-a06c-0c38d7cd8596","Type":"ContainerStarted","Data":"4b56f1118aa35164b4f59164a635adb0c60a1c7040503c8d48a23aaf1c28e95a"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.860095 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" event={"ID":"8a236822-dbca-49e0-af7f-417172a744e0","Type":"ContainerStarted","Data":"b49c0cef171de600040029cb950009813578e054c09f1cc083f299f63625f300"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.860155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" event={"ID":"8a236822-dbca-49e0-af7f-417172a744e0","Type":"ContainerStarted","Data":"a591ab8c82ff6bc4fc247c5d5c6ad22b6951a1e07c0ed99e4f81bca40efe4d6e"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.860999 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.867266 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4mkpq" podStartSLOduration=132.867248233 podStartE2EDuration="2m12.867248233s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.855206366 +0000 UTC m=+157.048877725" watchObservedRunningTime="2025-11-23 03:57:40.867248233 +0000 UTC m=+157.060919602" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.875837 4751 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8pfs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" 
start-of-body= Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.875898 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" podUID="8a236822-dbca-49e0-af7f-417172a744e0" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.878555 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" event={"ID":"b1680deb-dbd2-45ee-9556-8a478b70ea3d","Type":"ContainerStarted","Data":"7bf5556467a668dd0385a1327a9a683542f9214a4a143ae1dc6b772ea4564c3b"} Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.879127 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-n7llf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.879154 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-n7llf" podUID="d23032bc-8a84-4925-b4de-f2622d042320" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.879442 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.890139 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.891756 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.391744488 +0000 UTC m=+157.585415837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.891964 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nt2zf" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.984085 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" podStartSLOduration=132.984072175 podStartE2EDuration="2m12.984072175s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:40.981103654 +0000 UTC m=+157.174775013" watchObservedRunningTime="2025-11-23 03:57:40.984072175 +0000 UTC m=+157.177743534" Nov 23 03:57:40 crc kubenswrapper[4751]: I1123 03:57:40.991642 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:40 crc kubenswrapper[4751]: E1123 03:57:40.993080 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.493065329 +0000 UTC m=+157.686736688 (durationBeforeRetry 500ms). 
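
The nestedpendingoperations errors above show the kubelet's per-volume retry gate: after a failed MountDevice or TearDown, the operation is blocked until the logged deadline, i.e. the failure time plus the printed durationBeforeRetry of 500ms. A minimal sketch of such a gate, with hypothetical type names rather than the kubelet's actual ones:

package main

import (
	"fmt"
	"time"
)

// opStatus records when an operation last failed and how long to wait.
type opStatus struct {
	lastFailed time.Time
	backoff    time.Duration
}

type retryGate struct {
	ops map[string]*opStatus
}

func newRetryGate() *retryGate { return &retryGate{ops: map[string]*opStatus{}} }

// mayRetry reports whether the named operation is allowed to run again,
// mirroring the "No retries permitted until ..." check in the log.
func (g *retryGate) mayRetry(key string, now time.Time) bool {
	st, ok := g.ops[key]
	if !ok {
		return true
	}
	return now.After(st.lastFailed.Add(st.backoff))
}

// markFailed records a failure and the delay before the next attempt.
func (g *retryGate) markFailed(key string, now time.Time, backoff time.Duration) {
	g.ops[key] = &opStatus{lastFailed: now, backoff: backoff}
}

func main() {
	g := newRetryGate()
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db"
	now := time.Now()
	g.markFailed(key, now, 500*time.Millisecond)
	fmt.Println(g.mayRetry(key, now))                            // false: still blocked
	fmt.Println(g.mayRetry(key, now.Add(600*time.Millisecond))) // true: window passed
}
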
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.011433 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.012449 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.015776 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cbnj9" podStartSLOduration=133.015758875 podStartE2EDuration="2m13.015758875s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:41.015732104 +0000 UTC m=+157.209403473" watchObservedRunningTime="2025-11-23 03:57:41.015758875 +0000 UTC m=+157.209430234" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.047517 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gbjkx" podStartSLOduration=7.047503897 podStartE2EDuration="7.047503897s" podCreationTimestamp="2025-11-23 03:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:41.046069048 +0000 UTC m=+157.239740407" watchObservedRunningTime="2025-11-23 03:57:41.047503897 +0000 UTC m=+157.241175256" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.048359 4751 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vdnb9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]log ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]etcd ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/generic-apiserver-start-informers ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/max-in-flight-filter ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 23 03:57:41 crc kubenswrapper[4751]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 23 03:57:41 crc kubenswrapper[4751]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/project.openshift.io-projectcache ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/openshift.io-startinformers ok Nov 23 03:57:41 crc kubenswrapper[4751]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 23 03:57:41 crc 
kubenswrapper[4751]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 23 03:57:41 crc kubenswrapper[4751]: livez check failed Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.048447 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" podUID="ae952398-26e2-4b90-8df3-5cb6ff9529e9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.064613 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.077743 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" podStartSLOduration=133.077728488 podStartE2EDuration="2m13.077728488s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:41.077501441 +0000 UTC m=+157.271172800" watchObservedRunningTime="2025-11-23 03:57:41.077728488 +0000 UTC m=+157.271399847" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.100860 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.102090 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.602074629 +0000 UTC m=+157.795745988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.202076 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.202363 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.702329681 +0000 UTC m=+157.896001040 (durationBeforeRetry 500ms). 
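
The startup-probe bodies above use the kube-apiserver healthz listing format: one "[+]name ok" or "[-]name failed: reason" line per check, followed by an overall verdict such as "livez check failed". A small parser sketch for that format (the format assumption is taken directly from the log lines above):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

type check struct {
	Name   string
	OK     bool
	Reason string
}

// parseChecks extracts the per-check lines from a healthz-style body,
// skipping the trailing overall-verdict line.
func parseChecks(body string) []check {
	var out []check
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "[+]"):
			name := strings.TrimSuffix(strings.TrimPrefix(line, "[+]"), " ok")
			out = append(out, check{Name: name, OK: true})
		case strings.HasPrefix(line, "[-]"):
			rest := strings.TrimPrefix(line, "[-]")
			name, reason, _ := strings.Cut(rest, " failed: ")
			out = append(out, check{Name: name, OK: false, Reason: reason})
		}
	}
	return out
}

func main() {
	body := "[+]ping ok\n[-]backend-http failed: reason withheld\n" +
		"[-]has-synced failed: reason withheld\n[+]process-running ok\nhealthz check failed"
	for _, c := range parseChecks(body) {
		fmt.Printf("%-16s ok=%-5v %s\n", c.Name, c.OK, c.Reason)
	}
}
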
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.239381 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" podStartSLOduration=133.239363336 podStartE2EDuration="2m13.239363336s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:41.236953901 +0000 UTC m=+157.430625260" watchObservedRunningTime="2025-11-23 03:57:41.239363336 +0000 UTC m=+157.433034695" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.307153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.307876 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.807865816 +0000 UTC m=+158.001537175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.409423 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.409654 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.909629459 +0000 UTC m=+158.103300818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.410453 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.410775 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:41.91076263 +0000 UTC m=+158.104433989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.511921 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.512301 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.012286717 +0000 UTC m=+158.205958076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.613330 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.613610 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.113598628 +0000 UTC m=+158.307269987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.713962 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.714306 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.214292832 +0000 UTC m=+158.407964191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.813355 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:41 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:41 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:41 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.813619 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.814987 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.815274 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.315263203 +0000 UTC m=+158.508934562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.884980 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" event={"ID":"1dd84f54-df4c-4c6c-af61-4f92716457a4","Type":"ContainerStarted","Data":"f72eb6a1417cbe41080313dfa95573631bf23e8ccbce9356cf68d30ce4890424"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.886452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sz452" event={"ID":"243f13ba-48c3-46bc-9fa1-d345fd0ea542","Type":"ContainerStarted","Data":"8fabb82bb24b8e5ad70b04284439cd08795082f6bce112f6bd48047412902041"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.887976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pvgv8" event={"ID":"fdaf1dc7-872b-4fe6-adeb-83b381d1754e","Type":"ContainerStarted","Data":"9764c2cc7317bee4d1bddb28eb1feab54e958cda61d5ebfd23ff202fa3fd5bbd"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.889329 4751 generic.go:334] "Generic (PLEG): container finished" podID="b284192d-dce2-4f47-b7bf-b44841965150" containerID="1b0edbb0fd435f5a3758c6913286143cbb8c31676a3ed6522ef2fc6a8c7bb955" exitCode=0 Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.889397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" event={"ID":"b284192d-dce2-4f47-b7bf-b44841965150","Type":"ContainerDied","Data":"1b0edbb0fd435f5a3758c6913286143cbb8c31676a3ed6522ef2fc6a8c7bb955"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.891004 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" event={"ID":"b1680deb-dbd2-45ee-9556-8a478b70ea3d","Type":"ContainerStarted","Data":"87b16641538919b43e7496836395684b8b175479c40d494097e3fee9ea560da0"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.891055 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" event={"ID":"b1680deb-dbd2-45ee-9556-8a478b70ea3d","Type":"ContainerStarted","Data":"341e27fb63d9862ca72a88020e3eb3a0ed000c8ba1def3caef23940a8fc061b0"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.892213 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" event={"ID":"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877","Type":"ContainerStarted","Data":"e1cbf8e67a0d4d3f4029751fd8ba868cba1aa8574d5943a1c8e7295e47830172"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.893499 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" event={"ID":"c7428e18-93c0-4bc3-b774-93e08c88ff63","Type":"ContainerStarted","Data":"f37a323358be54caf47910239541b9d11bbf2fd8d0e0b1459b3bb92e395c1f63"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.893548 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" event={"ID":"c7428e18-93c0-4bc3-b774-93e08c88ff63","Type":"ContainerStarted","Data":"d997a0d1269c23eb370edbc5cd77eede56e25fc8957c1d638cd7917c4d0cdab1"} Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.894263 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rfmp4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.894305 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" podUID="37c87365-7c6f-4f74-957d-3511c274b1c0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.898112 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d7vbk" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.902441 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7gtxn" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.909355 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7s6fp" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.911875 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6q5kx" podStartSLOduration=133.911855486 podStartE2EDuration="2m13.911855486s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:41.908060573 +0000 UTC m=+158.101731932" watchObservedRunningTime="2025-11-23 03:57:41.911855486 +0000 UTC m=+158.105526835" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.915730 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.915916 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.415887495 +0000 UTC m=+158.609558854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.916931 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:41 crc kubenswrapper[4751]: E1123 03:57:41.917174 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.41716601 +0000 UTC m=+158.610837369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.954279 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8khnw" podStartSLOduration=133.954263927 podStartE2EDuration="2m13.954263927s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:41.95364369 +0000 UTC m=+158.147315049" watchObservedRunningTime="2025-11-23 03:57:41.954263927 +0000 UTC m=+158.147935286" Nov 23 03:57:41 crc kubenswrapper[4751]: I1123 03:57:41.978279 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cg8zn" podStartSLOduration=133.978264599 podStartE2EDuration="2m13.978264599s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:41.976605884 +0000 UTC m=+158.170277233" watchObservedRunningTime="2025-11-23 03:57:41.978264599 +0000 UTC m=+158.171935958" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.018835 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.020097 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.520081094 +0000 UTC m=+158.713752453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.062665 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8pfs" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.120299 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.120614 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.620603374 +0000 UTC m=+158.814274723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.220973 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.221453 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.721438662 +0000 UTC m=+158.915110021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.322871 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.323174 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.823164204 +0000 UTC m=+159.016835563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.424331 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.424533 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:42.924494685 +0000 UTC m=+159.118166044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.526535 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.526825 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:43.026813523 +0000 UTC m=+159.220484882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.627186 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.627458 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:43.127428325 +0000 UTC m=+159.321099684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.627683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.628031 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:43.128020831 +0000 UTC m=+159.321692190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.631127 4751 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.728874 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.729235 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 03:57:43.229209838 +0000 UTC m=+159.422881197 (durationBeforeRetry 500ms). 
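
The plugin_watcher record above is the turning point of the mount failures: the hostpath provisioner's registration socket appears under /var/lib/kubelet/plugins_registry, after which the kubelet validates and registers the CSI driver and the pending mount and unmount operations start succeeding. A stdlib-only sketch that detects new *.sock files by polling (the real plugin_watcher.go is event-driven; this is only illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// scanSockets returns the plugin registration sockets currently present,
// e.g. kubevirt.io.hostpath-provisioner-reg.sock from the log above.
func scanSockets(dir string) (map[string]bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	found := map[string]bool{}
	for _, e := range entries {
		if !e.IsDir() && strings.HasSuffix(e.Name(), ".sock") {
			found[filepath.Join(dir, e.Name())] = true
		}
	}
	return found, nil
}

func main() {
	const dir = "/var/lib/kubelet/plugins_registry"
	known := map[string]bool{}
	for i := 0; i < 3; i++ { // a few polling rounds for illustration
		current, err := scanSockets(dir)
		if err != nil {
			fmt.Println("scan:", err)
			return
		}
		for path := range current {
			if !known[path] {
				// The kubelet would now run the registration handshake
				// and add the driver to its registered-CSI-drivers list.
				fmt.Println("new plugin socket:", path)
				known[path] = true
			}
		}
		time.Sleep(time.Second)
	}
}
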
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.729626 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:42 crc kubenswrapper[4751]: E1123 03:57:42.729938 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 03:57:43.229931348 +0000 UTC m=+159.423602707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p6j49" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.802015 4751 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-23T03:57:42.631215498Z","Handler":null,"Name":""} Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.818561 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:42 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:42 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:42 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.819037 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.823283 4751 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.823331 4751 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.831009 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.841868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.898401 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8zg5t"] Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.900114 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.900319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" event={"ID":"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877","Type":"ContainerStarted","Data":"956d8d94a0f4286fcad1ac3822ca6eca1de62463e63c0d75f95eeab61ea716b1"} Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.900447 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" event={"ID":"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877","Type":"ContainerStarted","Data":"e19d18321e5ddbab2202a64d57ce89ee9e8d83a9507fe44c43f1ecb7c0e4e0d6"} Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.900464 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" event={"ID":"2ff8f82b-9489-4ffb-8e5c-40b20a0c6877","Type":"ContainerStarted","Data":"5161bb2c3b06f67573380e543041003f046e22a7f3f38c403a239e452a2e1c3e"} Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.901990 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rfmp4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.902032 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" podUID="37c87365-7c6f-4f74-957d-3511c274b1c0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.903916 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.916663 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zg5t"] Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.934738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.935950 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-catalog-content\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.936023 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j5fn\" (UniqueName: \"kubernetes.io/projected/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-kube-api-access-7j5fn\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.936120 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-utilities\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.941808 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.941843 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.952161 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4qdtn" podStartSLOduration=8.952146952 podStartE2EDuration="8.952146952s" podCreationTimestamp="2025-11-23 03:57:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:42.949944902 +0000 UTC m=+159.143616281" watchObservedRunningTime="2025-11-23 03:57:42.952146952 +0000 UTC m=+159.145818311" Nov 23 03:57:42 crc kubenswrapper[4751]: I1123 03:57:42.995214 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p6j49\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.038099 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j5fn\" (UniqueName: \"kubernetes.io/projected/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-kube-api-access-7j5fn\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " 
pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.038323 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-utilities\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.038409 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-catalog-content\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.038892 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-catalog-content\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.039000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-utilities\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.057086 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j5fn\" (UniqueName: \"kubernetes.io/projected/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-kube-api-access-7j5fn\") pod \"community-operators-8zg5t\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.090480 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfnmz"] Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.091438 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.093733 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.096271 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.106987 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfnmz"] Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.114685 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.139050 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284192d-dce2-4f47-b7bf-b44841965150-secret-volume\") pod \"b284192d-dce2-4f47-b7bf-b44841965150\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.139104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284192d-dce2-4f47-b7bf-b44841965150-config-volume\") pod \"b284192d-dce2-4f47-b7bf-b44841965150\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.139123 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf48s\" (UniqueName: \"kubernetes.io/projected/b284192d-dce2-4f47-b7bf-b44841965150-kube-api-access-nf48s\") pod \"b284192d-dce2-4f47-b7bf-b44841965150\" (UID: \"b284192d-dce2-4f47-b7bf-b44841965150\") " Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.139354 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9qh6\" (UniqueName: \"kubernetes.io/projected/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-kube-api-access-c9qh6\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.139420 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-utilities\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.139449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-catalog-content\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.140305 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b284192d-dce2-4f47-b7bf-b44841965150-config-volume" (OuterVolumeSpecName: "config-volume") pod "b284192d-dce2-4f47-b7bf-b44841965150" (UID: "b284192d-dce2-4f47-b7bf-b44841965150"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.145475 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b284192d-dce2-4f47-b7bf-b44841965150-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b284192d-dce2-4f47-b7bf-b44841965150" (UID: "b284192d-dce2-4f47-b7bf-b44841965150"). InnerVolumeSpecName "secret-volume". 
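
The interleaved MountVolume/UnmountVolume records come from the kubelet volume reconciler comparing the volumes that running pods need against the volumes actually mounted: collect-profiles has finished, so its secret, configmap, and projected volumes are torn down, while the new marketplace pods' volumes are brought up. A toy sketch of that set comparison (illustrative names only; the kubelet's reconciler tracks far more state):

package main

import "fmt"

// reconcile compares the desired set of volumes with the volumes actually
// mounted, and returns what to mount and what to tear down.
func reconcile(desired, actual map[string]bool) (mount, unmount []string) {
	for v := range desired {
		if !actual[v] {
			mount = append(mount, v)
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount = append(unmount, v)
		}
	}
	return
}

func main() {
	desired := map[string]bool{
		"certified-operators-sfnmz/utilities":       true,
		"certified-operators-sfnmz/catalog-content": true,
	}
	actual := map[string]bool{
		// Pod finished; its volume must come down.
		"collect-profiles-29397825-txtpz/secret-volume": true,
	}
	m, u := reconcile(desired, actual)
	fmt.Println("mount:", m)
	fmt.Println("unmount:", u)
}
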
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.145540 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b284192d-dce2-4f47-b7bf-b44841965150-kube-api-access-nf48s" (OuterVolumeSpecName: "kube-api-access-nf48s") pod "b284192d-dce2-4f47-b7bf-b44841965150" (UID: "b284192d-dce2-4f47-b7bf-b44841965150"). InnerVolumeSpecName "kube-api-access-nf48s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.223099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.240031 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-utilities\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.240080 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-catalog-content\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.240150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9qh6\" (UniqueName: \"kubernetes.io/projected/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-kube-api-access-c9qh6\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.240204 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284192d-dce2-4f47-b7bf-b44841965150-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.240219 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284192d-dce2-4f47-b7bf-b44841965150-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.240229 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf48s\" (UniqueName: \"kubernetes.io/projected/b284192d-dce2-4f47-b7bf-b44841965150-kube-api-access-nf48s\") on node \"crc\" DevicePath \"\"" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.241103 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-utilities\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.241199 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-catalog-content\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.261091 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9qh6\" (UniqueName: \"kubernetes.io/projected/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-kube-api-access-c9qh6\") pod \"certified-operators-sfnmz\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") " pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.292759 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d84pb"] Nov 23 03:57:43 crc kubenswrapper[4751]: E1123 03:57:43.292936 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b284192d-dce2-4f47-b7bf-b44841965150" containerName="collect-profiles" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.292947 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b284192d-dce2-4f47-b7bf-b44841965150" containerName="collect-profiles" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.293045 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b284192d-dce2-4f47-b7bf-b44841965150" containerName="collect-profiles" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.294073 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.301521 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p6j49"] Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.309133 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d84pb"] Nov 23 03:57:43 crc kubenswrapper[4751]: W1123 03:57:43.320827 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5740c4_4925_4b31_a055_45993f3811b8.slice/crio-95e393fc7f70d2b46f5339075821c6f4724ccb849c4b885a55abf033cc4bc96c WatchSource:0}: Error finding container 95e393fc7f70d2b46f5339075821c6f4724ccb849c4b885a55abf033cc4bc96c: Status 404 returned error can't find the container with id 95e393fc7f70d2b46f5339075821c6f4724ccb849c4b885a55abf033cc4bc96c Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.341156 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-utilities\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.341251 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-catalog-content\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.341297 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtfr\" (UniqueName: \"kubernetes.io/projected/74455a48-b084-43b7-9281-c31fd2267f1d-kube-api-access-smtfr\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.396897 4751 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-8zg5t"] Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.415945 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.442668 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smtfr\" (UniqueName: \"kubernetes.io/projected/74455a48-b084-43b7-9281-c31fd2267f1d-kube-api-access-smtfr\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.442732 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-utilities\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.442784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-catalog-content\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.443169 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-catalog-content\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.443965 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-utilities\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.460185 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtfr\" (UniqueName: \"kubernetes.io/projected/74455a48-b084-43b7-9281-c31fd2267f1d-kube-api-access-smtfr\") pod \"community-operators-d84pb\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.496933 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rv46s"] Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.507494 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.515034 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rv46s"] Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.544991 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-utilities\") pod \"certified-operators-rv46s\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.545082 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-catalog-content\") pod \"certified-operators-rv46s\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.545122 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vdh\" (UniqueName: \"kubernetes.io/projected/4e5fb7c1-6564-43b1-9013-04675d027cea-kube-api-access-98vdh\") pod \"certified-operators-rv46s\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.618699 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.647920 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98vdh\" (UniqueName: \"kubernetes.io/projected/4e5fb7c1-6564-43b1-9013-04675d027cea-kube-api-access-98vdh\") pod \"certified-operators-rv46s\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.648022 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-utilities\") pod \"certified-operators-rv46s\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.648107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-catalog-content\") pod \"certified-operators-rv46s\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.648938 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-catalog-content\") pod \"certified-operators-rv46s\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.649745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-utilities\") pod \"certified-operators-rv46s\" (UID: 
\"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.670024 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98vdh\" (UniqueName: \"kubernetes.io/projected/4e5fb7c1-6564-43b1-9013-04675d027cea-kube-api-access-98vdh\") pod \"certified-operators-rv46s\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.786738 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d84pb"] Nov 23 03:57:43 crc kubenswrapper[4751]: W1123 03:57:43.793979 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74455a48_b084_43b7_9281_c31fd2267f1d.slice/crio-3a6d7bf3e050028b2bfecd2804cf002124bb337937f131424cfeeea3ad155e3d WatchSource:0}: Error finding container 3a6d7bf3e050028b2bfecd2804cf002124bb337937f131424cfeeea3ad155e3d: Status 404 returned error can't find the container with id 3a6d7bf3e050028b2bfecd2804cf002124bb337937f131424cfeeea3ad155e3d Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.813266 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:43 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:43 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:43 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.813366 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.827515 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.839606 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfnmz"] Nov 23 03:57:43 crc kubenswrapper[4751]: W1123 03:57:43.874967 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cbb52a7_4f63_42d4_9958_00ff3dc1c242.slice/crio-6db6db212fd7b5ffc59eb3c0aef981a7a61493e869be3747390ba3acc9493a03 WatchSource:0}: Error finding container 6db6db212fd7b5ffc59eb3c0aef981a7a61493e869be3747390ba3acc9493a03: Status 404 returned error can't find the container with id 6db6db212fd7b5ffc59eb3c0aef981a7a61493e869be3747390ba3acc9493a03 Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.906756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" event={"ID":"b284192d-dce2-4f47-b7bf-b44841965150","Type":"ContainerDied","Data":"f5e9c6665aa3fca870cee2377e2afd9447054996e591de4a5f00565dc0c05f2e"} Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.906831 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5e9c6665aa3fca870cee2377e2afd9447054996e591de4a5f00565dc0c05f2e" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.906801 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.911175 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d84pb" event={"ID":"74455a48-b084-43b7-9281-c31fd2267f1d","Type":"ContainerStarted","Data":"3a6d7bf3e050028b2bfecd2804cf002124bb337937f131424cfeeea3ad155e3d"} Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.917750 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" event={"ID":"5f5740c4-4925-4b31-a055-45993f3811b8","Type":"ContainerStarted","Data":"8670e560dc9b91bab57406729a34aeea560c4aefc9d2277faa478f3e38f18a9f"} Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.917790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" event={"ID":"5f5740c4-4925-4b31-a055-45993f3811b8","Type":"ContainerStarted","Data":"95e393fc7f70d2b46f5339075821c6f4724ccb849c4b885a55abf033cc4bc96c"} Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.918522 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.921327 4751 generic.go:334] "Generic (PLEG): container finished" podID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerID="f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d" exitCode=0 Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.921407 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg5t" event={"ID":"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a","Type":"ContainerDied","Data":"f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d"} Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.921430 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg5t" 
event={"ID":"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a","Type":"ContainerStarted","Data":"81bfcb5e037d6b61fa74c749a0ace797a840ff36294331620850168a5661fc08"} Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.923103 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 03:57:43 crc kubenswrapper[4751]: I1123 03:57:43.933555 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfnmz" event={"ID":"5cbb52a7-4f63-42d4-9958-00ff3dc1c242","Type":"ContainerStarted","Data":"6db6db212fd7b5ffc59eb3c0aef981a7a61493e869be3747390ba3acc9493a03"} Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.015980 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" podStartSLOduration=136.015963065 podStartE2EDuration="2m16.015963065s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:43.994569114 +0000 UTC m=+160.188240473" watchObservedRunningTime="2025-11-23 03:57:44.015963065 +0000 UTC m=+160.209634424" Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.091530 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rv46s"] Nov 23 03:57:44 crc kubenswrapper[4751]: W1123 03:57:44.153370 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e5fb7c1_6564_43b1_9013_04675d027cea.slice/crio-f87af897e5dfe8fcf2a09be3f4da881f6cedfa2914cc28b510fa00054c878b52 WatchSource:0}: Error finding container f87af897e5dfe8fcf2a09be3f4da881f6cedfa2914cc28b510fa00054c878b52: Status 404 returned error can't find the container with id f87af897e5dfe8fcf2a09be3f4da881f6cedfa2914cc28b510fa00054c878b52 Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.660593 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.812632 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:44 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:44 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:44 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.812699 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.901189 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mwj8h"] Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.905997 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.909447 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.915441 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwj8h"] Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.940379 4751 generic.go:334] "Generic (PLEG): container finished" podID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerID="16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3" exitCode=0 Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.940489 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv46s" event={"ID":"4e5fb7c1-6564-43b1-9013-04675d027cea","Type":"ContainerDied","Data":"16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3"} Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.940526 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv46s" event={"ID":"4e5fb7c1-6564-43b1-9013-04675d027cea","Type":"ContainerStarted","Data":"f87af897e5dfe8fcf2a09be3f4da881f6cedfa2914cc28b510fa00054c878b52"} Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.946324 4751 generic.go:334] "Generic (PLEG): container finished" podID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerID="33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8" exitCode=0 Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.946378 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfnmz" event={"ID":"5cbb52a7-4f63-42d4-9958-00ff3dc1c242","Type":"ContainerDied","Data":"33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8"} Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.956605 4751 generic.go:334] "Generic (PLEG): container finished" podID="74455a48-b084-43b7-9281-c31fd2267f1d" containerID="8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0" exitCode=0 Nov 23 03:57:44 crc kubenswrapper[4751]: I1123 03:57:44.956748 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d84pb" event={"ID":"74455a48-b084-43b7-9281-c31fd2267f1d","Type":"ContainerDied","Data":"8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0"} Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.064461 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-utilities\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.065127 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-catalog-content\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.065610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmxvd\" (UniqueName: 
\"kubernetes.io/projected/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-kube-api-access-bmxvd\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.167032 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmxvd\" (UniqueName: \"kubernetes.io/projected/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-kube-api-access-bmxvd\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.167089 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-utilities\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.167148 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-catalog-content\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.167551 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-catalog-content\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.168005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-utilities\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.206814 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmxvd\" (UniqueName: \"kubernetes.io/projected/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-kube-api-access-bmxvd\") pod \"redhat-marketplace-mwj8h\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.226919 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.292721 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmgp"] Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.294092 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.345387 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmgp"] Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.468185 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwj8h"] Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.471042 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-catalog-content\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.471084 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbx6\" (UniqueName: \"kubernetes.io/projected/cff8845b-6eba-433d-9caf-aff92ac145b4-kube-api-access-fkbx6\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.471133 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-utilities\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: W1123 03:57:45.471155 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b58dce2_ca9b_436c_a5a9_fcf1286eb473.slice/crio-9b4c0506fc0511f7ad605904124700717473d8169a9beade6fc410731be9b1e0 WatchSource:0}: Error finding container 9b4c0506fc0511f7ad605904124700717473d8169a9beade6fc410731be9b1e0: Status 404 returned error can't find the container with id 9b4c0506fc0511f7ad605904124700717473d8169a9beade6fc410731be9b1e0 Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.572077 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-catalog-content\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.572124 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkbx6\" (UniqueName: \"kubernetes.io/projected/cff8845b-6eba-433d-9caf-aff92ac145b4-kube-api-access-fkbx6\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.572155 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-utilities\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.572673 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-utilities\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.572934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-catalog-content\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.591880 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkbx6\" (UniqueName: \"kubernetes.io/projected/cff8845b-6eba-433d-9caf-aff92ac145b4-kube-api-access-fkbx6\") pod \"redhat-marketplace-fbmgp\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.663610 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.812205 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:45 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:45 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:45 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.812507 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.931707 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmgp"] Nov 23 03:57:45 crc kubenswrapper[4751]: W1123 03:57:45.942704 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcff8845b_6eba_433d_9caf_aff92ac145b4.slice/crio-353087bfe90c5041996f97a61bb79b3902b1cef195d98da7935b506dd54da518 WatchSource:0}: Error finding container 353087bfe90c5041996f97a61bb79b3902b1cef195d98da7935b506dd54da518: Status 404 returned error can't find the container with id 353087bfe90c5041996f97a61bb79b3902b1cef195d98da7935b506dd54da518 Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.972083 4751 generic.go:334] "Generic (PLEG): container finished" podID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerID="942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007" exitCode=0 Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.972151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwj8h" event={"ID":"4b58dce2-ca9b-436c-a5a9-fcf1286eb473","Type":"ContainerDied","Data":"942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007"} Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.972184 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwj8h" 
event={"ID":"4b58dce2-ca9b-436c-a5a9-fcf1286eb473","Type":"ContainerStarted","Data":"9b4c0506fc0511f7ad605904124700717473d8169a9beade6fc410731be9b1e0"} Nov 23 03:57:45 crc kubenswrapper[4751]: I1123 03:57:45.976555 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmgp" event={"ID":"cff8845b-6eba-433d-9caf-aff92ac145b4","Type":"ContainerStarted","Data":"353087bfe90c5041996f97a61bb79b3902b1cef195d98da7935b506dd54da518"} Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.008568 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.012543 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vdnb9" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.105395 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ngtqn"] Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.139408 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ngtqn"] Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.140697 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.152063 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.281547 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-utilities\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.281769 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-catalog-content\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.281819 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7hpk\" (UniqueName: \"kubernetes.io/projected/1a122b67-dfa5-4660-9aec-f7f6fbad9216-kube-api-access-q7hpk\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.383295 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7hpk\" (UniqueName: \"kubernetes.io/projected/1a122b67-dfa5-4660-9aec-f7f6fbad9216-kube-api-access-q7hpk\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.383392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-utilities\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " 
pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.383413 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-catalog-content\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.383796 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-catalog-content\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.383853 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-utilities\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.412263 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7hpk\" (UniqueName: \"kubernetes.io/projected/1a122b67-dfa5-4660-9aec-f7f6fbad9216-kube-api-access-q7hpk\") pod \"redhat-operators-ngtqn\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.470848 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.494561 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-n7llf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.494584 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-n7llf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.494614 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-n7llf" podUID="d23032bc-8a84-4925-b4de-f2622d042320" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.494633 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-n7llf" podUID="d23032bc-8a84-4925-b4de-f2622d042320" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.497213 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c8mfw"] Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.498200 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.511172 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c8mfw"] Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.587270 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-utilities\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.587522 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtmpw\" (UniqueName: \"kubernetes.io/projected/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-kube-api-access-xtmpw\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.587574 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-catalog-content\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.688324 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-utilities\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.688433 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtmpw\" (UniqueName: \"kubernetes.io/projected/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-kube-api-access-xtmpw\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.688457 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-catalog-content\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.688862 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-catalog-content\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.689271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-utilities\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.704679 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xtmpw\" (UniqueName: \"kubernetes.io/projected/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-kube-api-access-xtmpw\") pod \"redhat-operators-c8mfw\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.719939 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.720015 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.721811 4751 patch_prober.go:28] interesting pod/console-f9d7485db-bbrhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.721862 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bbrhw" podUID="4baedc4d-15a1-49d0-b82f-a57fce419702" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.810360 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.811085 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.812734 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:46 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:46 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:46 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.812783 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.926951 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ngtqn"] Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.987570 4751 generic.go:334] "Generic (PLEG): container finished" podID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerID="1c6502c2eb9c98f1847d2e25999919671ca0ef37422445f6bce266ce4911f729" exitCode=0 Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.987626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmgp" event={"ID":"cff8845b-6eba-433d-9caf-aff92ac145b4","Type":"ContainerDied","Data":"1c6502c2eb9c98f1847d2e25999919671ca0ef37422445f6bce266ce4911f729"} Nov 23 03:57:46 crc kubenswrapper[4751]: I1123 03:57:46.989928 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngtqn" 
event={"ID":"1a122b67-dfa5-4660-9aec-f7f6fbad9216","Type":"ContainerStarted","Data":"06dbf6e5e27cd422f203dacfb16f7d44dad1d6a21a7521a97cca1ffd011ffad3"} Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.177853 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c8mfw"] Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.298371 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.299102 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.301184 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.301538 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.302579 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.407577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/150d01da-13b4-47aa-8724-b28588d24666-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"150d01da-13b4-47aa-8724-b28588d24666\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.407845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/150d01da-13b4-47aa-8724-b28588d24666-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"150d01da-13b4-47aa-8724-b28588d24666\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.508827 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/150d01da-13b4-47aa-8724-b28588d24666-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"150d01da-13b4-47aa-8724-b28588d24666\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.509181 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/150d01da-13b4-47aa-8724-b28588d24666-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"150d01da-13b4-47aa-8724-b28588d24666\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.509321 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/150d01da-13b4-47aa-8724-b28588d24666-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"150d01da-13b4-47aa-8724-b28588d24666\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.511740 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.538688 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/150d01da-13b4-47aa-8724-b28588d24666-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"150d01da-13b4-47aa-8724-b28588d24666\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.618184 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.812388 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:47 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:47 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:47 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.812439 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:47 crc kubenswrapper[4751]: I1123 03:57:47.975752 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.019657 4751 generic.go:334] "Generic (PLEG): container finished" podID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerID="aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00" exitCode=0 Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.019738 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngtqn" event={"ID":"1a122b67-dfa5-4660-9aec-f7f6fbad9216","Type":"ContainerDied","Data":"aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00"} Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.030644 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerID="125dc2e20ec420a54be56146fe48509e615984d60b5530c178b925a5fe43aa03" exitCode=0 Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.030737 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8mfw" event={"ID":"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2","Type":"ContainerDied","Data":"125dc2e20ec420a54be56146fe48509e615984d60b5530c178b925a5fe43aa03"} Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.031194 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8mfw" event={"ID":"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2","Type":"ContainerStarted","Data":"891f6a172e1774a2a6e91a2f62668b54d4b862271e900377f3e52e94b03c7d6e"} Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.168127 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.169277 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.171466 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.172235 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.179085 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.320369 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65b011b6-0ad3-48a7-a101-77196ada1e92-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"65b011b6-0ad3-48a7-a101-77196ada1e92\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.320447 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65b011b6-0ad3-48a7-a101-77196ada1e92-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"65b011b6-0ad3-48a7-a101-77196ada1e92\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.421633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65b011b6-0ad3-48a7-a101-77196ada1e92-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"65b011b6-0ad3-48a7-a101-77196ada1e92\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.421722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65b011b6-0ad3-48a7-a101-77196ada1e92-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"65b011b6-0ad3-48a7-a101-77196ada1e92\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.421781 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65b011b6-0ad3-48a7-a101-77196ada1e92-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"65b011b6-0ad3-48a7-a101-77196ada1e92\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.437820 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65b011b6-0ad3-48a7-a101-77196ada1e92-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"65b011b6-0ad3-48a7-a101-77196ada1e92\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.494388 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.533866 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-npjlm" Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.811913 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:48 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:48 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:48 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:48 crc kubenswrapper[4751]: I1123 03:57:48.811967 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:49 crc kubenswrapper[4751]: I1123 03:57:49.039787 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"150d01da-13b4-47aa-8724-b28588d24666","Type":"ContainerStarted","Data":"3d7b6147a84722c48e46261ee7767ccba5e142f19ea238ccf2320826b142f34b"} Nov 23 03:57:49 crc kubenswrapper[4751]: I1123 03:57:49.812839 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:49 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:49 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:49 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:49 crc kubenswrapper[4751]: I1123 03:57:49.813159 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:50 crc kubenswrapper[4751]: I1123 03:57:50.811990 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:50 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:50 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:50 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:50 crc kubenswrapper[4751]: I1123 03:57:50.812082 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:50 crc kubenswrapper[4751]: I1123 03:57:50.961210 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:50 crc kubenswrapper[4751]: I1123 03:57:50.966614 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81fe3605-5395-4a60-ba10-3a9bad078169-metrics-certs\") pod \"network-metrics-daemon-c5nsl\" (UID: \"81fe3605-5395-4a60-ba10-3a9bad078169\") " pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:51 crc kubenswrapper[4751]: I1123 03:57:51.159556 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c5nsl" Nov 23 03:57:51 crc kubenswrapper[4751]: I1123 03:57:51.812182 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:51 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:51 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:51 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:51 crc kubenswrapper[4751]: I1123 03:57:51.812260 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:52 crc kubenswrapper[4751]: I1123 03:57:52.812620 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:52 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:52 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:52 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:52 crc kubenswrapper[4751]: I1123 03:57:52.812681 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:53 crc kubenswrapper[4751]: I1123 03:57:53.065501 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"150d01da-13b4-47aa-8724-b28588d24666","Type":"ContainerStarted","Data":"e5a07dc5d818b680b7e93181d475cd9cf8c2768a0f0dc68e80590e11b70ded0a"} Nov 23 03:57:53 crc kubenswrapper[4751]: I1123 03:57:53.088308 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=6.088288764 podStartE2EDuration="6.088288764s" podCreationTimestamp="2025-11-23 03:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:57:53.078507339 +0000 UTC m=+169.272178698" watchObservedRunningTime="2025-11-23 03:57:53.088288764 +0000 UTC m=+169.281960123" Nov 23 03:57:53 crc kubenswrapper[4751]: I1123 03:57:53.812161 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:53 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:53 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:53 
crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:53 crc kubenswrapper[4751]: I1123 03:57:53.812474 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:54 crc kubenswrapper[4751]: I1123 03:57:54.074333 4751 generic.go:334] "Generic (PLEG): container finished" podID="150d01da-13b4-47aa-8724-b28588d24666" containerID="e5a07dc5d818b680b7e93181d475cd9cf8c2768a0f0dc68e80590e11b70ded0a" exitCode=0 Nov 23 03:57:54 crc kubenswrapper[4751]: I1123 03:57:54.074441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"150d01da-13b4-47aa-8724-b28588d24666","Type":"ContainerDied","Data":"e5a07dc5d818b680b7e93181d475cd9cf8c2768a0f0dc68e80590e11b70ded0a"} Nov 23 03:57:54 crc kubenswrapper[4751]: I1123 03:57:54.811181 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:54 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:54 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:54 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:54 crc kubenswrapper[4751]: I1123 03:57:54.811236 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:55 crc kubenswrapper[4751]: I1123 03:57:55.812458 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:55 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Nov 23 03:57:55 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:55 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:55 crc kubenswrapper[4751]: I1123 03:57:55.812806 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:56 crc kubenswrapper[4751]: I1123 03:57:56.492982 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-n7llf" Nov 23 03:57:56 crc kubenswrapper[4751]: I1123 03:57:56.720130 4751 patch_prober.go:28] interesting pod/console-f9d7485db-bbrhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 23 03:57:56 crc kubenswrapper[4751]: I1123 03:57:56.720193 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bbrhw" podUID="4baedc4d-15a1-49d0-b82f-a57fce419702" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 23 03:57:56 crc kubenswrapper[4751]: I1123 03:57:56.812010 4751 
patch_prober.go:28] interesting pod/router-default-5444994796-5lk47 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 03:57:56 crc kubenswrapper[4751]: [+]has-synced ok Nov 23 03:57:56 crc kubenswrapper[4751]: [+]process-running ok Nov 23 03:57:56 crc kubenswrapper[4751]: healthz check failed Nov 23 03:57:56 crc kubenswrapper[4751]: I1123 03:57:56.812066 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5lk47" podUID="5a7bbd2e-2fd6-42d0-948d-3fe6d136e752" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 03:57:57 crc kubenswrapper[4751]: I1123 03:57:57.812220 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:57 crc kubenswrapper[4751]: I1123 03:57:57.814540 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5lk47" Nov 23 03:57:57 crc kubenswrapper[4751]: I1123 03:57:57.877583 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:57:57 crc kubenswrapper[4751]: I1123 03:57:57.959702 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/150d01da-13b4-47aa-8724-b28588d24666-kubelet-dir\") pod \"150d01da-13b4-47aa-8724-b28588d24666\" (UID: \"150d01da-13b4-47aa-8724-b28588d24666\") " Nov 23 03:57:57 crc kubenswrapper[4751]: I1123 03:57:57.959801 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/150d01da-13b4-47aa-8724-b28588d24666-kube-api-access\") pod \"150d01da-13b4-47aa-8724-b28588d24666\" (UID: \"150d01da-13b4-47aa-8724-b28588d24666\") " Nov 23 03:57:57 crc kubenswrapper[4751]: I1123 03:57:57.960469 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150d01da-13b4-47aa-8724-b28588d24666-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "150d01da-13b4-47aa-8724-b28588d24666" (UID: "150d01da-13b4-47aa-8724-b28588d24666"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 03:57:57 crc kubenswrapper[4751]: I1123 03:57:57.968527 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150d01da-13b4-47aa-8724-b28588d24666-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "150d01da-13b4-47aa-8724-b28588d24666" (UID: "150d01da-13b4-47aa-8724-b28588d24666"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:57:58 crc kubenswrapper[4751]: I1123 03:57:58.061156 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/150d01da-13b4-47aa-8724-b28588d24666-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 03:57:58 crc kubenswrapper[4751]: I1123 03:57:58.061193 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/150d01da-13b4-47aa-8724-b28588d24666-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 03:57:58 crc kubenswrapper[4751]: I1123 03:57:58.102124 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"150d01da-13b4-47aa-8724-b28588d24666","Type":"ContainerDied","Data":"3d7b6147a84722c48e46261ee7767ccba5e142f19ea238ccf2320826b142f34b"} Nov 23 03:57:58 crc kubenswrapper[4751]: I1123 03:57:58.102203 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d7b6147a84722c48e46261ee7767ccba5e142f19ea238ccf2320826b142f34b" Nov 23 03:57:58 crc kubenswrapper[4751]: I1123 03:57:58.102149 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 03:58:03 crc kubenswrapper[4751]: I1123 03:58:03.122253 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.496494 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.496696 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9qh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-sfnmz_openshift-marketplace(5cbb52a7-4f63-42d4-9958-00ff3dc1c242): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.497945 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sfnmz" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.550864 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.550994 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-smtfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-d84pb_openshift-marketplace(74455a48-b084-43b7-9281-c31fd2267f1d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.551813 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.551897 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98vdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rv46s_openshift-marketplace(4e5fb7c1-6564-43b1-9013-04675d027cea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.552917 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-d84pb" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" Nov 23 03:58:04 crc kubenswrapper[4751]: E1123 03:58:04.552994 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rv46s" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" Nov 23 03:58:06 crc kubenswrapper[4751]: I1123 03:58:06.724475 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:58:06 crc kubenswrapper[4751]: I1123 03:58:06.727981 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 03:58:07 crc kubenswrapper[4751]: E1123 03:58:07.352620 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sfnmz" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" Nov 23 03:58:07 crc kubenswrapper[4751]: E1123 03:58:07.353083 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rv46s" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" Nov 23 03:58:07 
crc kubenswrapper[4751]: E1123 03:58:07.379123 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-d84pb" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" Nov 23 03:58:07 crc kubenswrapper[4751]: I1123 03:58:07.796439 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 03:58:07 crc kubenswrapper[4751]: I1123 03:58:07.805900 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c5nsl"] Nov 23 03:58:07 crc kubenswrapper[4751]: W1123 03:58:07.841987 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81fe3605_5395_4a60_ba10_3a9bad078169.slice/crio-cf1964c49719c35f6b7531a8df7449c7a2e2d54afc5b70e7efd54aed036aaefd WatchSource:0}: Error finding container cf1964c49719c35f6b7531a8df7449c7a2e2d54afc5b70e7efd54aed036aaefd: Status 404 returned error can't find the container with id cf1964c49719c35f6b7531a8df7449c7a2e2d54afc5b70e7efd54aed036aaefd Nov 23 03:58:07 crc kubenswrapper[4751]: W1123 03:58:07.883076 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod65b011b6_0ad3_48a7_a101_77196ada1e92.slice/crio-6f57f17a000167f40238d447d2244ff656bd9f96c5a7c1b238caabc5e835e26f WatchSource:0}: Error finding container 6f57f17a000167f40238d447d2244ff656bd9f96c5a7c1b238caabc5e835e26f: Status 404 returned error can't find the container with id 6f57f17a000167f40238d447d2244ff656bd9f96c5a7c1b238caabc5e835e26f Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.115206 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.115612 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.158399 4751 generic.go:334] "Generic (PLEG): container finished" podID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerID="20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab" exitCode=0 Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.158480 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg5t" event={"ID":"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a","Type":"ContainerDied","Data":"20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab"} Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.172269 4751 generic.go:334] "Generic (PLEG): container finished" podID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerID="fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53" exitCode=0 Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.172336 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwj8h" 
event={"ID":"4b58dce2-ca9b-436c-a5a9-fcf1286eb473","Type":"ContainerDied","Data":"fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53"} Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.198123 4751 generic.go:334] "Generic (PLEG): container finished" podID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerID="b10d7bf80badbd74f32b7814031b3a50d66c11ad922ace1235611f2bbcf5aef5" exitCode=0 Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.198190 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmgp" event={"ID":"cff8845b-6eba-433d-9caf-aff92ac145b4","Type":"ContainerDied","Data":"b10d7bf80badbd74f32b7814031b3a50d66c11ad922ace1235611f2bbcf5aef5"} Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.202589 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngtqn" event={"ID":"1a122b67-dfa5-4660-9aec-f7f6fbad9216","Type":"ContainerStarted","Data":"7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396"} Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.206401 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"65b011b6-0ad3-48a7-a101-77196ada1e92","Type":"ContainerStarted","Data":"6f57f17a000167f40238d447d2244ff656bd9f96c5a7c1b238caabc5e835e26f"} Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.209519 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" event={"ID":"81fe3605-5395-4a60-ba10-3a9bad078169","Type":"ContainerStarted","Data":"94ea0c7be7b300b4c67093470e04f4ea9cf6e5502bbd89903fc74495fd7226d4"} Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.209554 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" event={"ID":"81fe3605-5395-4a60-ba10-3a9bad078169","Type":"ContainerStarted","Data":"cf1964c49719c35f6b7531a8df7449c7a2e2d54afc5b70e7efd54aed036aaefd"} Nov 23 03:58:08 crc kubenswrapper[4751]: I1123 03:58:08.211680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8mfw" event={"ID":"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2","Type":"ContainerStarted","Data":"afec23a827f4a97cbd9fb50d34ec6cb1655fb243555c50912b16ce95ed3370b7"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.219389 4751 generic.go:334] "Generic (PLEG): container finished" podID="65b011b6-0ad3-48a7-a101-77196ada1e92" containerID="3b9bb871b299628d1acd6c200c8fd4323b4a2c1868d24b413f9331bbca9b5509" exitCode=0 Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.220487 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"65b011b6-0ad3-48a7-a101-77196ada1e92","Type":"ContainerDied","Data":"3b9bb871b299628d1acd6c200c8fd4323b4a2c1868d24b413f9331bbca9b5509"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.241861 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c5nsl" event={"ID":"81fe3605-5395-4a60-ba10-3a9bad078169","Type":"ContainerStarted","Data":"0365c909aa4107041d645dbb63fbfdd21fae344d01fd5ad9141153be09b9292f"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.249141 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerID="afec23a827f4a97cbd9fb50d34ec6cb1655fb243555c50912b16ce95ed3370b7" exitCode=0 Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.249266 
4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8mfw" event={"ID":"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2","Type":"ContainerDied","Data":"afec23a827f4a97cbd9fb50d34ec6cb1655fb243555c50912b16ce95ed3370b7"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.249300 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8mfw" event={"ID":"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2","Type":"ContainerStarted","Data":"821c9e63f6a3f54b2eaf85a281a7379fb657be68179d1b60b741a2d358bd768c"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.254404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg5t" event={"ID":"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a","Type":"ContainerStarted","Data":"65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.258759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwj8h" event={"ID":"4b58dce2-ca9b-436c-a5a9-fcf1286eb473","Type":"ContainerStarted","Data":"803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.266520 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmgp" event={"ID":"cff8845b-6eba-433d-9caf-aff92ac145b4","Type":"ContainerStarted","Data":"b4ade08c44582fc3cb78e1370761f819657ccc201eeddf25092d8edc46fe2525"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.269813 4751 generic.go:334] "Generic (PLEG): container finished" podID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerID="7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396" exitCode=0 Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.269855 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngtqn" event={"ID":"1a122b67-dfa5-4660-9aec-f7f6fbad9216","Type":"ContainerDied","Data":"7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396"} Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.278758 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-c5nsl" podStartSLOduration=161.278734627 podStartE2EDuration="2m41.278734627s" podCreationTimestamp="2025-11-23 03:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:58:09.270457803 +0000 UTC m=+185.464129172" watchObservedRunningTime="2025-11-23 03:58:09.278734627 +0000 UTC m=+185.472405996" Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.291704 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c8mfw" podStartSLOduration=2.640957521 podStartE2EDuration="23.291689849s" podCreationTimestamp="2025-11-23 03:57:46 +0000 UTC" firstStartedPulling="2025-11-23 03:57:48.040981211 +0000 UTC m=+164.234652570" lastFinishedPulling="2025-11-23 03:58:08.691713539 +0000 UTC m=+184.885384898" observedRunningTime="2025-11-23 03:58:09.288168854 +0000 UTC m=+185.481840203" watchObservedRunningTime="2025-11-23 03:58:09.291689849 +0000 UTC m=+185.485361208" Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.320981 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8zg5t" podStartSLOduration=2.582258361 
podStartE2EDuration="27.320962274s" podCreationTimestamp="2025-11-23 03:57:42 +0000 UTC" firstStartedPulling="2025-11-23 03:57:43.922658712 +0000 UTC m=+160.116330071" lastFinishedPulling="2025-11-23 03:58:08.661362625 +0000 UTC m=+184.855033984" observedRunningTime="2025-11-23 03:58:09.316098212 +0000 UTC m=+185.509769591" watchObservedRunningTime="2025-11-23 03:58:09.320962274 +0000 UTC m=+185.514633633" Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.392888 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fbmgp" podStartSLOduration=2.7521894700000002 podStartE2EDuration="24.392871146s" podCreationTimestamp="2025-11-23 03:57:45 +0000 UTC" firstStartedPulling="2025-11-23 03:57:46.988933507 +0000 UTC m=+163.182604866" lastFinishedPulling="2025-11-23 03:58:08.629615173 +0000 UTC m=+184.823286542" observedRunningTime="2025-11-23 03:58:09.392245539 +0000 UTC m=+185.585916918" watchObservedRunningTime="2025-11-23 03:58:09.392871146 +0000 UTC m=+185.586542505" Nov 23 03:58:09 crc kubenswrapper[4751]: I1123 03:58:09.413941 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mwj8h" podStartSLOduration=2.82588523 podStartE2EDuration="25.413920758s" podCreationTimestamp="2025-11-23 03:57:44 +0000 UTC" firstStartedPulling="2025-11-23 03:57:46.006375409 +0000 UTC m=+162.200046768" lastFinishedPulling="2025-11-23 03:58:08.594410937 +0000 UTC m=+184.788082296" observedRunningTime="2025-11-23 03:58:09.410242058 +0000 UTC m=+185.603913427" watchObservedRunningTime="2025-11-23 03:58:09.413920758 +0000 UTC m=+185.607592127" Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.276723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngtqn" event={"ID":"1a122b67-dfa5-4660-9aec-f7f6fbad9216","Type":"ContainerStarted","Data":"a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31"} Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.295056 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ngtqn" podStartSLOduration=2.658466927 podStartE2EDuration="24.295036402s" podCreationTimestamp="2025-11-23 03:57:46 +0000 UTC" firstStartedPulling="2025-11-23 03:57:48.04094442 +0000 UTC m=+164.234615779" lastFinishedPulling="2025-11-23 03:58:09.677513875 +0000 UTC m=+185.871185254" observedRunningTime="2025-11-23 03:58:10.293520981 +0000 UTC m=+186.487192340" watchObservedRunningTime="2025-11-23 03:58:10.295036402 +0000 UTC m=+186.488707761" Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.650083 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.767438 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65b011b6-0ad3-48a7-a101-77196ada1e92-kubelet-dir\") pod \"65b011b6-0ad3-48a7-a101-77196ada1e92\" (UID: \"65b011b6-0ad3-48a7-a101-77196ada1e92\") " Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.767534 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65b011b6-0ad3-48a7-a101-77196ada1e92-kube-api-access\") pod \"65b011b6-0ad3-48a7-a101-77196ada1e92\" (UID: \"65b011b6-0ad3-48a7-a101-77196ada1e92\") " Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.768617 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65b011b6-0ad3-48a7-a101-77196ada1e92-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "65b011b6-0ad3-48a7-a101-77196ada1e92" (UID: "65b011b6-0ad3-48a7-a101-77196ada1e92"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.775814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b011b6-0ad3-48a7-a101-77196ada1e92-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "65b011b6-0ad3-48a7-a101-77196ada1e92" (UID: "65b011b6-0ad3-48a7-a101-77196ada1e92"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.868797 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65b011b6-0ad3-48a7-a101-77196ada1e92-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:10 crc kubenswrapper[4751]: I1123 03:58:10.868830 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65b011b6-0ad3-48a7-a101-77196ada1e92-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:11 crc kubenswrapper[4751]: I1123 03:58:11.284171 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 03:58:11 crc kubenswrapper[4751]: I1123 03:58:11.287395 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"65b011b6-0ad3-48a7-a101-77196ada1e92","Type":"ContainerDied","Data":"6f57f17a000167f40238d447d2244ff656bd9f96c5a7c1b238caabc5e835e26f"} Nov 23 03:58:11 crc kubenswrapper[4751]: I1123 03:58:11.287425 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f57f17a000167f40238d447d2244ff656bd9f96c5a7c1b238caabc5e835e26f" Nov 23 03:58:12 crc kubenswrapper[4751]: I1123 03:58:12.708867 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 03:58:13 crc kubenswrapper[4751]: I1123 03:58:13.224009 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:58:13 crc kubenswrapper[4751]: I1123 03:58:13.224301 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:58:13 crc kubenswrapper[4751]: I1123 03:58:13.471068 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:58:13 crc kubenswrapper[4751]: I1123 03:58:13.507429 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:58:15 crc kubenswrapper[4751]: I1123 03:58:15.138532 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rtcqm"] Nov 23 03:58:15 crc kubenswrapper[4751]: I1123 03:58:15.227249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:58:15 crc kubenswrapper[4751]: I1123 03:58:15.227324 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:58:15 crc kubenswrapper[4751]: I1123 03:58:15.285447 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:58:15 crc kubenswrapper[4751]: I1123 03:58:15.356853 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:58:15 crc kubenswrapper[4751]: I1123 03:58:15.664179 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:58:15 crc kubenswrapper[4751]: I1123 03:58:15.664413 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:58:15 crc kubenswrapper[4751]: I1123 03:58:15.716130 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:58:16 crc kubenswrapper[4751]: I1123 03:58:16.360324 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:58:16 crc kubenswrapper[4751]: I1123 03:58:16.473229 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:58:16 crc kubenswrapper[4751]: I1123 03:58:16.473284 4751 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:58:16 crc kubenswrapper[4751]: I1123 03:58:16.526262 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:58:16 crc kubenswrapper[4751]: I1123 03:58:16.811998 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:58:16 crc kubenswrapper[4751]: I1123 03:58:16.812060 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:58:16 crc kubenswrapper[4751]: I1123 03:58:16.854815 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:58:17 crc kubenswrapper[4751]: I1123 03:58:17.225859 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zs6nk" Nov 23 03:58:17 crc kubenswrapper[4751]: I1123 03:58:17.364200 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:58:17 crc kubenswrapper[4751]: I1123 03:58:17.366288 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:58:17 crc kubenswrapper[4751]: I1123 03:58:17.789274 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmgp"] Nov 23 03:58:19 crc kubenswrapper[4751]: I1123 03:58:19.185865 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c8mfw"] Nov 23 03:58:19 crc kubenswrapper[4751]: I1123 03:58:19.330953 4751 generic.go:334] "Generic (PLEG): container finished" podID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerID="ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847" exitCode=0 Nov 23 03:58:19 crc kubenswrapper[4751]: I1123 03:58:19.331037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv46s" event={"ID":"4e5fb7c1-6564-43b1-9013-04675d027cea","Type":"ContainerDied","Data":"ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847"} Nov 23 03:58:19 crc kubenswrapper[4751]: I1123 03:58:19.331140 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c8mfw" podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerName="registry-server" containerID="cri-o://821c9e63f6a3f54b2eaf85a281a7379fb657be68179d1b60b741a2d358bd768c" gracePeriod=2 Nov 23 03:58:19 crc kubenswrapper[4751]: I1123 03:58:19.331378 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fbmgp" podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerName="registry-server" containerID="cri-o://b4ade08c44582fc3cb78e1370761f819657ccc201eeddf25092d8edc46fe2525" gracePeriod=2 Nov 23 03:58:20 crc kubenswrapper[4751]: I1123 03:58:20.336478 4751 generic.go:334] "Generic (PLEG): container finished" podID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerID="b4ade08c44582fc3cb78e1370761f819657ccc201eeddf25092d8edc46fe2525" exitCode=0 Nov 23 03:58:20 crc kubenswrapper[4751]: I1123 03:58:20.336553 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmgp" 
event={"ID":"cff8845b-6eba-433d-9caf-aff92ac145b4","Type":"ContainerDied","Data":"b4ade08c44582fc3cb78e1370761f819657ccc201eeddf25092d8edc46fe2525"} Nov 23 03:58:20 crc kubenswrapper[4751]: I1123 03:58:20.338858 4751 generic.go:334] "Generic (PLEG): container finished" podID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerID="821c9e63f6a3f54b2eaf85a281a7379fb657be68179d1b60b741a2d358bd768c" exitCode=0 Nov 23 03:58:20 crc kubenswrapper[4751]: I1123 03:58:20.338926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8mfw" event={"ID":"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2","Type":"ContainerDied","Data":"821c9e63f6a3f54b2eaf85a281a7379fb657be68179d1b60b741a2d358bd768c"} Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.096501 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.187988 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-utilities\") pod \"cff8845b-6eba-433d-9caf-aff92ac145b4\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.188061 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkbx6\" (UniqueName: \"kubernetes.io/projected/cff8845b-6eba-433d-9caf-aff92ac145b4-kube-api-access-fkbx6\") pod \"cff8845b-6eba-433d-9caf-aff92ac145b4\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.188128 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-catalog-content\") pod \"cff8845b-6eba-433d-9caf-aff92ac145b4\" (UID: \"cff8845b-6eba-433d-9caf-aff92ac145b4\") " Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.188853 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-utilities" (OuterVolumeSpecName: "utilities") pod "cff8845b-6eba-433d-9caf-aff92ac145b4" (UID: "cff8845b-6eba-433d-9caf-aff92ac145b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.195115 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff8845b-6eba-433d-9caf-aff92ac145b4-kube-api-access-fkbx6" (OuterVolumeSpecName: "kube-api-access-fkbx6") pod "cff8845b-6eba-433d-9caf-aff92ac145b4" (UID: "cff8845b-6eba-433d-9caf-aff92ac145b4"). InnerVolumeSpecName "kube-api-access-fkbx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.223499 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cff8845b-6eba-433d-9caf-aff92ac145b4" (UID: "cff8845b-6eba-433d-9caf-aff92ac145b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.233131 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.290382 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-utilities\") pod \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.290469 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtmpw\" (UniqueName: \"kubernetes.io/projected/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-kube-api-access-xtmpw\") pod \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.290609 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-catalog-content\") pod \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\" (UID: \"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2\") " Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.291145 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-utilities" (OuterVolumeSpecName: "utilities") pod "ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" (UID: "ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.291228 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkbx6\" (UniqueName: \"kubernetes.io/projected/cff8845b-6eba-433d-9caf-aff92ac145b4-kube-api-access-fkbx6\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.291275 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.291289 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cff8845b-6eba-433d-9caf-aff92ac145b4-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.294570 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-kube-api-access-xtmpw" (OuterVolumeSpecName: "kube-api-access-xtmpw") pod "ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" (UID: "ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2"). InnerVolumeSpecName "kube-api-access-xtmpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.346875 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c8mfw" event={"ID":"ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2","Type":"ContainerDied","Data":"891f6a172e1774a2a6e91a2f62668b54d4b862271e900377f3e52e94b03c7d6e"} Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.346932 4751 scope.go:117] "RemoveContainer" containerID="821c9e63f6a3f54b2eaf85a281a7379fb657be68179d1b60b741a2d358bd768c" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.346940 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c8mfw" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.350634 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmgp" event={"ID":"cff8845b-6eba-433d-9caf-aff92ac145b4","Type":"ContainerDied","Data":"353087bfe90c5041996f97a61bb79b3902b1cef195d98da7935b506dd54da518"} Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.350786 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbmgp" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.383077 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmgp"] Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.386057 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmgp"] Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.392940 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.392979 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtmpw\" (UniqueName: \"kubernetes.io/projected/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-kube-api-access-xtmpw\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:21 crc kubenswrapper[4751]: I1123 03:58:21.972251 4751 scope.go:117] "RemoveContainer" containerID="afec23a827f4a97cbd9fb50d34ec6cb1655fb243555c50912b16ce95ed3370b7" Nov 23 03:58:22 crc kubenswrapper[4751]: I1123 03:58:22.127855 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" (UID: "ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:58:22 crc kubenswrapper[4751]: I1123 03:58:22.204229 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:22 crc kubenswrapper[4751]: I1123 03:58:22.273197 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c8mfw"] Nov 23 03:58:22 crc kubenswrapper[4751]: I1123 03:58:22.277434 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c8mfw"] Nov 23 03:58:22 crc kubenswrapper[4751]: I1123 03:58:22.376706 4751 scope.go:117] "RemoveContainer" containerID="125dc2e20ec420a54be56146fe48509e615984d60b5530c178b925a5fe43aa03" Nov 23 03:58:22 crc kubenswrapper[4751]: I1123 03:58:22.523076 4751 scope.go:117] "RemoveContainer" containerID="b4ade08c44582fc3cb78e1370761f819657ccc201eeddf25092d8edc46fe2525" Nov 23 03:58:22 crc kubenswrapper[4751]: I1123 03:58:22.652340 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" path="/var/lib/kubelet/pods/ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2/volumes" Nov 23 03:58:22 crc kubenswrapper[4751]: I1123 03:58:22.653545 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" path="/var/lib/kubelet/pods/cff8845b-6eba-433d-9caf-aff92ac145b4/volumes" Nov 23 03:58:23 crc kubenswrapper[4751]: I1123 03:58:23.270853 4751 scope.go:117] "RemoveContainer" containerID="b10d7bf80badbd74f32b7814031b3a50d66c11ad922ace1235611f2bbcf5aef5" Nov 23 03:58:23 crc kubenswrapper[4751]: I1123 03:58:23.629673 4751 scope.go:117] "RemoveContainer" containerID="1c6502c2eb9c98f1847d2e25999919671ca0ef37422445f6bce266ce4911f729" Nov 23 03:58:24 crc kubenswrapper[4751]: I1123 03:58:24.372746 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv46s" event={"ID":"4e5fb7c1-6564-43b1-9013-04675d027cea","Type":"ContainerStarted","Data":"7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa"} Nov 23 03:58:24 crc kubenswrapper[4751]: I1123 03:58:24.376804 4751 generic.go:334] "Generic (PLEG): container finished" podID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerID="292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae" exitCode=0 Nov 23 03:58:24 crc kubenswrapper[4751]: I1123 03:58:24.376840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfnmz" event={"ID":"5cbb52a7-4f63-42d4-9958-00ff3dc1c242","Type":"ContainerDied","Data":"292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae"} Nov 23 03:58:24 crc kubenswrapper[4751]: I1123 03:58:24.408056 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rv46s" podStartSLOduration=2.724607282 podStartE2EDuration="41.408041166s" podCreationTimestamp="2025-11-23 03:57:43 +0000 UTC" firstStartedPulling="2025-11-23 03:57:44.946684306 +0000 UTC m=+161.140355665" lastFinishedPulling="2025-11-23 03:58:23.63011819 +0000 UTC m=+199.823789549" observedRunningTime="2025-11-23 03:58:24.391829526 +0000 UTC m=+200.585500885" watchObservedRunningTime="2025-11-23 03:58:24.408041166 +0000 UTC m=+200.601712525" Nov 23 03:58:25 crc kubenswrapper[4751]: I1123 03:58:25.383377 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="74455a48-b084-43b7-9281-c31fd2267f1d" containerID="ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321" exitCode=0 Nov 23 03:58:25 crc kubenswrapper[4751]: I1123 03:58:25.383451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d84pb" event={"ID":"74455a48-b084-43b7-9281-c31fd2267f1d","Type":"ContainerDied","Data":"ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321"} Nov 23 03:58:25 crc kubenswrapper[4751]: I1123 03:58:25.384934 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfnmz" event={"ID":"5cbb52a7-4f63-42d4-9958-00ff3dc1c242","Type":"ContainerStarted","Data":"01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb"} Nov 23 03:58:25 crc kubenswrapper[4751]: I1123 03:58:25.424173 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfnmz" podStartSLOduration=2.270369175 podStartE2EDuration="42.42415667s" podCreationTimestamp="2025-11-23 03:57:43 +0000 UTC" firstStartedPulling="2025-11-23 03:57:44.953912202 +0000 UTC m=+161.147583551" lastFinishedPulling="2025-11-23 03:58:25.107699687 +0000 UTC m=+201.301371046" observedRunningTime="2025-11-23 03:58:25.420228304 +0000 UTC m=+201.613899653" watchObservedRunningTime="2025-11-23 03:58:25.42415667 +0000 UTC m=+201.617828029" Nov 23 03:58:26 crc kubenswrapper[4751]: I1123 03:58:26.391433 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d84pb" event={"ID":"74455a48-b084-43b7-9281-c31fd2267f1d","Type":"ContainerStarted","Data":"a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f"} Nov 23 03:58:26 crc kubenswrapper[4751]: I1123 03:58:26.416427 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d84pb" podStartSLOduration=2.604380367 podStartE2EDuration="43.416411217s" podCreationTimestamp="2025-11-23 03:57:43 +0000 UTC" firstStartedPulling="2025-11-23 03:57:44.959640578 +0000 UTC m=+161.153311947" lastFinishedPulling="2025-11-23 03:58:25.771671438 +0000 UTC m=+201.965342797" observedRunningTime="2025-11-23 03:58:26.414954448 +0000 UTC m=+202.608625807" watchObservedRunningTime="2025-11-23 03:58:26.416411217 +0000 UTC m=+202.610082576" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 03:58:33.416476 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 03:58:33.418001 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 03:58:33.458728 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 03:58:33.619382 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 03:58:33.619438 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 03:58:33.683276 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 
03:58:33.829200 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 03:58:33.829275 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:58:33 crc kubenswrapper[4751]: I1123 03:58:33.871550 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:58:34 crc kubenswrapper[4751]: I1123 03:58:34.503282 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:58:34 crc kubenswrapper[4751]: I1123 03:58:34.505186 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:58:34 crc kubenswrapper[4751]: I1123 03:58:34.510749 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:58:36 crc kubenswrapper[4751]: I1123 03:58:36.991692 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rv46s"] Nov 23 03:58:36 crc kubenswrapper[4751]: I1123 03:58:36.992028 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rv46s" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerName="registry-server" containerID="cri-o://7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa" gracePeriod=2 Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.429759 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.463169 4751 generic.go:334] "Generic (PLEG): container finished" podID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerID="7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa" exitCode=0 Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.463232 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv46s" event={"ID":"4e5fb7c1-6564-43b1-9013-04675d027cea","Type":"ContainerDied","Data":"7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa"} Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.463296 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rv46s" event={"ID":"4e5fb7c1-6564-43b1-9013-04675d027cea","Type":"ContainerDied","Data":"f87af897e5dfe8fcf2a09be3f4da881f6cedfa2914cc28b510fa00054c878b52"} Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.463294 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rv46s" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.463322 4751 scope.go:117] "RemoveContainer" containerID="7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.490167 4751 scope.go:117] "RemoveContainer" containerID="ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.512554 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-catalog-content\") pod \"4e5fb7c1-6564-43b1-9013-04675d027cea\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.512679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-utilities\") pod \"4e5fb7c1-6564-43b1-9013-04675d027cea\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.512800 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98vdh\" (UniqueName: \"kubernetes.io/projected/4e5fb7c1-6564-43b1-9013-04675d027cea-kube-api-access-98vdh\") pod \"4e5fb7c1-6564-43b1-9013-04675d027cea\" (UID: \"4e5fb7c1-6564-43b1-9013-04675d027cea\") " Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.514185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-utilities" (OuterVolumeSpecName: "utilities") pod "4e5fb7c1-6564-43b1-9013-04675d027cea" (UID: "4e5fb7c1-6564-43b1-9013-04675d027cea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.518946 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e5fb7c1-6564-43b1-9013-04675d027cea-kube-api-access-98vdh" (OuterVolumeSpecName: "kube-api-access-98vdh") pod "4e5fb7c1-6564-43b1-9013-04675d027cea" (UID: "4e5fb7c1-6564-43b1-9013-04675d027cea"). InnerVolumeSpecName "kube-api-access-98vdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.525018 4751 scope.go:117] "RemoveContainer" containerID="16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.557880 4751 scope.go:117] "RemoveContainer" containerID="7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa" Nov 23 03:58:37 crc kubenswrapper[4751]: E1123 03:58:37.558444 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa\": container with ID starting with 7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa not found: ID does not exist" containerID="7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.558494 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa"} err="failed to get container status \"7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa\": rpc error: code = NotFound desc = could not find container \"7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa\": container with ID starting with 7ef6304b79928ffce9db7e3ef1a2b3cbed3b4469e8bb391d8f05da45cef92eaa not found: ID does not exist" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.558553 4751 scope.go:117] "RemoveContainer" containerID="ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847" Nov 23 03:58:37 crc kubenswrapper[4751]: E1123 03:58:37.558964 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847\": container with ID starting with ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847 not found: ID does not exist" containerID="ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.558999 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847"} err="failed to get container status \"ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847\": rpc error: code = NotFound desc = could not find container \"ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847\": container with ID starting with ef2e633de78f525dd9aab900fbe40c1b3d40e9f4abcdec4f33375f40ab9e6847 not found: ID does not exist" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.559021 4751 scope.go:117] "RemoveContainer" containerID="16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3" Nov 23 03:58:37 crc kubenswrapper[4751]: E1123 03:58:37.559388 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3\": container with ID starting with 16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3 not found: ID does not exist" containerID="16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.559444 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3"} err="failed to get container status \"16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3\": rpc error: code = NotFound desc = could not find container \"16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3\": container with ID starting with 16cfc65736a16fef3b1b28e025e4692a530fa78e2cc933d0b62f7412d2b479d3 not found: ID does not exist" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.572013 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e5fb7c1-6564-43b1-9013-04675d027cea" (UID: "4e5fb7c1-6564-43b1-9013-04675d027cea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.614598 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.614638 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e5fb7c1-6564-43b1-9013-04675d027cea-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.614648 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98vdh\" (UniqueName: \"kubernetes.io/projected/4e5fb7c1-6564-43b1-9013-04675d027cea-kube-api-access-98vdh\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.795162 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rv46s"] Nov 23 03:58:37 crc kubenswrapper[4751]: I1123 03:58:37.797802 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rv46s"] Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.114904 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.115310 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.115396 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.116019 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.116074 4751 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1" gracePeriod=600 Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.472193 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1" exitCode=0 Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.472283 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1"} Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.472378 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"10f32d0d7f2e62c478ddb48470cd1194ce502e5eae263cb6ce53a7e62595816a"} Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.651844 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" path="/var/lib/kubelet/pods/4e5fb7c1-6564-43b1-9013-04675d027cea/volumes" Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.792839 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d84pb"] Nov 23 03:58:38 crc kubenswrapper[4751]: I1123 03:58:38.793542 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d84pb" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" containerName="registry-server" containerID="cri-o://a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f" gracePeriod=2 Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.231845 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.334491 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-utilities\") pod \"74455a48-b084-43b7-9281-c31fd2267f1d\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.334557 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smtfr\" (UniqueName: \"kubernetes.io/projected/74455a48-b084-43b7-9281-c31fd2267f1d-kube-api-access-smtfr\") pod \"74455a48-b084-43b7-9281-c31fd2267f1d\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.334621 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-catalog-content\") pod \"74455a48-b084-43b7-9281-c31fd2267f1d\" (UID: \"74455a48-b084-43b7-9281-c31fd2267f1d\") " Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.335462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-utilities" (OuterVolumeSpecName: "utilities") pod "74455a48-b084-43b7-9281-c31fd2267f1d" (UID: "74455a48-b084-43b7-9281-c31fd2267f1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.342501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74455a48-b084-43b7-9281-c31fd2267f1d-kube-api-access-smtfr" (OuterVolumeSpecName: "kube-api-access-smtfr") pod "74455a48-b084-43b7-9281-c31fd2267f1d" (UID: "74455a48-b084-43b7-9281-c31fd2267f1d"). InnerVolumeSpecName "kube-api-access-smtfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.394693 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74455a48-b084-43b7-9281-c31fd2267f1d" (UID: "74455a48-b084-43b7-9281-c31fd2267f1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.436307 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smtfr\" (UniqueName: \"kubernetes.io/projected/74455a48-b084-43b7-9281-c31fd2267f1d-kube-api-access-smtfr\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.436425 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.436457 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74455a48-b084-43b7-9281-c31fd2267f1d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.479886 4751 generic.go:334] "Generic (PLEG): container finished" podID="74455a48-b084-43b7-9281-c31fd2267f1d" containerID="a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f" exitCode=0 Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.479937 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d84pb" event={"ID":"74455a48-b084-43b7-9281-c31fd2267f1d","Type":"ContainerDied","Data":"a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f"} Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.479969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d84pb" event={"ID":"74455a48-b084-43b7-9281-c31fd2267f1d","Type":"ContainerDied","Data":"3a6d7bf3e050028b2bfecd2804cf002124bb337937f131424cfeeea3ad155e3d"} Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.479988 4751 scope.go:117] "RemoveContainer" containerID="a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.479983 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d84pb" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.500003 4751 scope.go:117] "RemoveContainer" containerID="ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.511331 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d84pb"] Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.515568 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d84pb"] Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.530790 4751 scope.go:117] "RemoveContainer" containerID="8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.549368 4751 scope.go:117] "RemoveContainer" containerID="a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f" Nov 23 03:58:39 crc kubenswrapper[4751]: E1123 03:58:39.549859 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f\": container with ID starting with a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f not found: ID does not exist" containerID="a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.550000 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f"} err="failed to get container status \"a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f\": rpc error: code = NotFound desc = could not find container \"a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f\": container with ID starting with a23f82f5f745d2d7e3dbfb56418ad5103a12c07a58e8a09181b570f51d32382f not found: ID does not exist" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.550047 4751 scope.go:117] "RemoveContainer" containerID="ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321" Nov 23 03:58:39 crc kubenswrapper[4751]: E1123 03:58:39.551175 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321\": container with ID starting with ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321 not found: ID does not exist" containerID="ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.551228 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321"} err="failed to get container status \"ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321\": rpc error: code = NotFound desc = could not find container \"ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321\": container with ID starting with ff9fa9ea34a66b47acf26f1f35677f0f1d747ad3253b6da98a020b4347da1321 not found: ID does not exist" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.551264 4751 scope.go:117] "RemoveContainer" containerID="8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0" Nov 23 03:58:39 crc kubenswrapper[4751]: E1123 03:58:39.551544 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0\": container with ID starting with 8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0 not found: ID does not exist" containerID="8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0" Nov 23 03:58:39 crc kubenswrapper[4751]: I1123 03:58:39.551574 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0"} err="failed to get container status \"8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0\": rpc error: code = NotFound desc = could not find container \"8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0\": container with ID starting with 8a158d4fe62dc4f9a6717a4a342d19fb33760b0048d0c9a2d7a948ade98d16a0 not found: ID does not exist" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.167414 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" podUID="983f8d3e-cb51-4b5d-b11b-d28c27a334f0" containerName="oauth-openshift" containerID="cri-o://40118f3bbd87702be6bfbdc385659760f5f3675e4258a6294401b1e98e045e57" gracePeriod=15 Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.497750 4751 generic.go:334] "Generic (PLEG): container finished" podID="983f8d3e-cb51-4b5d-b11b-d28c27a334f0" containerID="40118f3bbd87702be6bfbdc385659760f5f3675e4258a6294401b1e98e045e57" exitCode=0 Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.497808 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" event={"ID":"983f8d3e-cb51-4b5d-b11b-d28c27a334f0","Type":"ContainerDied","Data":"40118f3bbd87702be6bfbdc385659760f5f3675e4258a6294401b1e98e045e57"} Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.665076 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" path="/var/lib/kubelet/pods/74455a48-b084-43b7-9281-c31fd2267f1d/volumes" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.677467 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.763735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-session\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.763803 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-error\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.763850 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-serving-cert\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.763890 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-policies\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.763941 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vttjs\" (UniqueName: \"kubernetes.io/projected/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-kube-api-access-vttjs\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.763981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-login\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-trusted-ca-bundle\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764062 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-router-certs\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764094 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-provider-selection\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc 
kubenswrapper[4751]: I1123 03:58:40.764139 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-cliconfig\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764187 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-ocp-branding-template\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764226 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-dir\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764261 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-idp-0-file-data\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764297 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-service-ca\") pod \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\" (UID: \"983f8d3e-cb51-4b5d-b11b-d28c27a334f0\") " Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764955 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.764995 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.765575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.766663 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.768274 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.770584 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.770879 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.771489 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.772049 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-kube-api-access-vttjs" (OuterVolumeSpecName: "kube-api-access-vttjs") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "kube-api-access-vttjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.773098 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.773468 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.773978 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.774375 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.775181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "983f8d3e-cb51-4b5d-b11b-d28c27a334f0" (UID: "983f8d3e-cb51-4b5d-b11b-d28c27a334f0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.865843 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.865906 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.865925 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.865944 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.865965 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.865985 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.866002 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vttjs\" (UniqueName: \"kubernetes.io/projected/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-kube-api-access-vttjs\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.866019 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.866039 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.866057 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.866075 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.866094 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.866113 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:40 crc kubenswrapper[4751]: I1123 03:58:40.866131 4751 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983f8d3e-cb51-4b5d-b11b-d28c27a334f0-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 23 03:58:41 crc kubenswrapper[4751]: I1123 03:58:41.508906 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" event={"ID":"983f8d3e-cb51-4b5d-b11b-d28c27a334f0","Type":"ContainerDied","Data":"069f13eda87fd9379c72de38c5cf021c5018f5b5ea9ae821eb2a44d579c22320"} Nov 23 03:58:41 crc kubenswrapper[4751]: I1123 03:58:41.509007 4751 scope.go:117] "RemoveContainer" containerID="40118f3bbd87702be6bfbdc385659760f5f3675e4258a6294401b1e98e045e57" Nov 23 03:58:41 crc kubenswrapper[4751]: I1123 03:58:41.510459 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rtcqm" Nov 23 03:58:41 crc kubenswrapper[4751]: I1123 03:58:41.557913 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rtcqm"] Nov 23 03:58:41 crc kubenswrapper[4751]: I1123 03:58:41.563853 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rtcqm"] Nov 23 03:58:42 crc kubenswrapper[4751]: I1123 03:58:42.656566 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983f8d3e-cb51-4b5d-b11b-d28c27a334f0" path="/var/lib/kubelet/pods/983f8d3e-cb51-4b5d-b11b-d28c27a334f0/volumes" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.412891 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79798c6d69-h4dtx"] Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413431 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" containerName="extract-content" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413444 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" containerName="extract-content" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413456 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerName="extract-content" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413462 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerName="extract-content" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413472 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413478 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413486 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413492 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413501 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150d01da-13b4-47aa-8724-b28588d24666" containerName="pruner" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413507 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="150d01da-13b4-47aa-8724-b28588d24666" containerName="pruner" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413516 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983f8d3e-cb51-4b5d-b11b-d28c27a334f0" containerName="oauth-openshift" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413523 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="983f8d3e-cb51-4b5d-b11b-d28c27a334f0" containerName="oauth-openshift" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413533 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b011b6-0ad3-48a7-a101-77196ada1e92" containerName="pruner" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413540 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b011b6-0ad3-48a7-a101-77196ada1e92" containerName="pruner" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413550 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413557 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413570 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413577 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413590 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerName="extract-utilities" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413598 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerName="extract-utilities" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413607 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerName="extract-content" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413613 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerName="extract-content" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413622 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" containerName="extract-utilities" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413628 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" containerName="extract-utilities" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413636 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerName="extract-utilities" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413642 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerName="extract-utilities" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413650 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerName="extract-content" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413656 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerName="extract-content" Nov 23 03:58:46 crc kubenswrapper[4751]: E1123 03:58:46.413665 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerName="extract-utilities" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413673 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerName="extract-utilities" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413797 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7cbefe-fbc4-4f7a-a494-2dddff38ebd2" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413809 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="74455a48-b084-43b7-9281-c31fd2267f1d" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413820 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b011b6-0ad3-48a7-a101-77196ada1e92" containerName="pruner" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413831 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff8845b-6eba-433d-9caf-aff92ac145b4" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413843 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="983f8d3e-cb51-4b5d-b11b-d28c27a334f0" containerName="oauth-openshift" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413857 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5fb7c1-6564-43b1-9013-04675d027cea" containerName="registry-server" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.413865 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="150d01da-13b4-47aa-8724-b28588d24666" containerName="pruner" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.414547 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.417142 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.417813 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.418066 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.418192 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.418464 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.418529 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.418477 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.419157 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.419160 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.421908 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.422229 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.423196 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.434059 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.435586 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441051 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441108 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-audit-policies\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: 
\"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d863919b-5800-4876-99bd-3e4469272e45-audit-dir\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441162 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441305 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441515 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-login\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441632 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-service-ca\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441691 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-router-certs\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441779 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-error\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441893 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.441957 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-session\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.442005 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84pv\" (UniqueName: \"kubernetes.io/projected/d863919b-5800-4876-99bd-3e4469272e45-kube-api-access-v84pv\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.442069 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.450189 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79798c6d69-h4dtx"] Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.458652 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-service-ca\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542754 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-login\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542777 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-router-certs\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542804 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-error\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542821 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-session\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84pv\" (UniqueName: \"kubernetes.io/projected/d863919b-5800-4876-99bd-3e4469272e45-kube-api-access-v84pv\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542903 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542922 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-audit-policies\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d863919b-5800-4876-99bd-3e4469272e45-audit-dir\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.542972 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.544088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.544828 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d863919b-5800-4876-99bd-3e4469272e45-audit-dir\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.545879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-audit-policies\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.545999 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-service-ca\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.551738 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-login\") pod \"oauth-openshift-79798c6d69-h4dtx\" 
(UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.552067 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.552219 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.552756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-session\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.556107 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.557629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.558478 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.560035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-system-router-certs\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.566420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d863919b-5800-4876-99bd-3e4469272e45-v4-0-config-user-template-error\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: 
\"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.575999 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84pv\" (UniqueName: \"kubernetes.io/projected/d863919b-5800-4876-99bd-3e4469272e45-kube-api-access-v84pv\") pod \"oauth-openshift-79798c6d69-h4dtx\" (UID: \"d863919b-5800-4876-99bd-3e4469272e45\") " pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:46 crc kubenswrapper[4751]: I1123 03:58:46.743707 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:47 crc kubenswrapper[4751]: I1123 03:58:47.210581 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79798c6d69-h4dtx"] Nov 23 03:58:47 crc kubenswrapper[4751]: I1123 03:58:47.577203 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" event={"ID":"d863919b-5800-4876-99bd-3e4469272e45","Type":"ContainerStarted","Data":"106ef9baf2651680969825ce9c82c678d13d9b1685b48adcc1f037b1e6f9f8a6"} Nov 23 03:58:47 crc kubenswrapper[4751]: I1123 03:58:47.577572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" event={"ID":"d863919b-5800-4876-99bd-3e4469272e45","Type":"ContainerStarted","Data":"64d1d5eaa0ff183b8f4de9489580c5f2452524c46cb80180f3901dc6ea75d508"} Nov 23 03:58:47 crc kubenswrapper[4751]: I1123 03:58:47.577592 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:58:47 crc kubenswrapper[4751]: I1123 03:58:47.580403 4751 patch_prober.go:28] interesting pod/oauth-openshift-79798c6d69-h4dtx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" start-of-body= Nov 23 03:58:47 crc kubenswrapper[4751]: I1123 03:58:47.580479 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" podUID="d863919b-5800-4876-99bd-3e4469272e45" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" Nov 23 03:58:47 crc kubenswrapper[4751]: I1123 03:58:47.603818 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" podStartSLOduration=32.603793949 podStartE2EDuration="32.603793949s" podCreationTimestamp="2025-11-23 03:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:58:47.602243107 +0000 UTC m=+223.795914456" watchObservedRunningTime="2025-11-23 03:58:47.603793949 +0000 UTC m=+223.797465348" Nov 23 03:58:48 crc kubenswrapper[4751]: I1123 03:58:48.591929 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79798c6d69-h4dtx" Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.896950 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfnmz"] Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.900450 4751 
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.900450 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sfnmz" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerName="registry-server" containerID="cri-o://01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb" gracePeriod=30
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.916469 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zg5t"]
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.916996 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8zg5t" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerName="registry-server" containerID="cri-o://65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a" gracePeriod=30
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.928512 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rfmp4"]
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.931104 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" podUID="37c87365-7c6f-4f74-957d-3511c274b1c0" containerName="marketplace-operator" containerID="cri-o://85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4" gracePeriod=30
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.949741 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwj8h"]
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.950081 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mwj8h" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerName="registry-server" containerID="cri-o://803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452" gracePeriod=30
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.953250 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b4zhm"]
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.954187 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.968199 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ngtqn"]
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.968620 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ngtqn" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="registry-server" containerID="cri-o://a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31" gracePeriod=30
Nov 23 03:59:15 crc kubenswrapper[4751]: I1123 03:59:15.984983 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b4zhm"]
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.015592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/77b14a1a-54d8-4706-95b6-2b94d8dffa43-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.015648 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2q8\" (UniqueName: \"kubernetes.io/projected/77b14a1a-54d8-4706-95b6-2b94d8dffa43-kube-api-access-dx2q8\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.015767 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77b14a1a-54d8-4706-95b6-2b94d8dffa43-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.117183 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77b14a1a-54d8-4706-95b6-2b94d8dffa43-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.117497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/77b14a1a-54d8-4706-95b6-2b94d8dffa43-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.117527 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx2q8\" (UniqueName: \"kubernetes.io/projected/77b14a1a-54d8-4706-95b6-2b94d8dffa43-kube-api-access-dx2q8\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.118466 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77b14a1a-54d8-4706-95b6-2b94d8dffa43-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.133148 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/77b14a1a-54d8-4706-95b6-2b94d8dffa43-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.135325 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx2q8\" (UniqueName: \"kubernetes.io/projected/77b14a1a-54d8-4706-95b6-2b94d8dffa43-kube-api-access-dx2q8\") pod \"marketplace-operator-79b997595-b4zhm\" (UID: \"77b14a1a-54d8-4706-95b6-2b94d8dffa43\") " pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.301210 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.401986 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg5t"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.403231 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.403404 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfnmz"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.415798 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwj8h"
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.420420 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-utilities\") pod \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") "
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421447 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9qh6\" (UniqueName: \"kubernetes.io/projected/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-kube-api-access-c9qh6\") pod \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") "
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421496 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtq8s\" (UniqueName: \"kubernetes.io/projected/37c87365-7c6f-4f74-957d-3511c274b1c0-kube-api-access-vtq8s\") pod \"37c87365-7c6f-4f74-957d-3511c274b1c0\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") "
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421523 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-catalog-content\") pod \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") "
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421561 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-catalog-content\") pod \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") "
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421587 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-operator-metrics\") pod \"37c87365-7c6f-4f74-957d-3511c274b1c0\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") "
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-trusted-ca\") pod \"37c87365-7c6f-4f74-957d-3511c274b1c0\" (UID: \"37c87365-7c6f-4f74-957d-3511c274b1c0\") "
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421643 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-utilities\") pod \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") "
Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421680 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-catalog-content\") pod \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\" (UID: \"5cbb52a7-4f63-42d4-9958-00ff3dc1c242\") "
\"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-utilities\") pod \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421730 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmxvd\" (UniqueName: \"kubernetes.io/projected/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-kube-api-access-bmxvd\") pod \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\" (UID: \"4b58dce2-ca9b-436c-a5a9-fcf1286eb473\") " Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421757 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j5fn\" (UniqueName: \"kubernetes.io/projected/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-kube-api-access-7j5fn\") pod \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\" (UID: \"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a\") " Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.421398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-utilities" (OuterVolumeSpecName: "utilities") pod "10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" (UID: "10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.422780 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "37c87365-7c6f-4f74-957d-3511c274b1c0" (UID: "37c87365-7c6f-4f74-957d-3511c274b1c0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.424409 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-utilities" (OuterVolumeSpecName: "utilities") pod "4b58dce2-ca9b-436c-a5a9-fcf1286eb473" (UID: "4b58dce2-ca9b-436c-a5a9-fcf1286eb473"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.425506 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-utilities" (OuterVolumeSpecName: "utilities") pod "5cbb52a7-4f63-42d4-9958-00ff3dc1c242" (UID: "5cbb52a7-4f63-42d4-9958-00ff3dc1c242"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.438295 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c87365-7c6f-4f74-957d-3511c274b1c0-kube-api-access-vtq8s" (OuterVolumeSpecName: "kube-api-access-vtq8s") pod "37c87365-7c6f-4f74-957d-3511c274b1c0" (UID: "37c87365-7c6f-4f74-957d-3511c274b1c0"). InnerVolumeSpecName "kube-api-access-vtq8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.440501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "37c87365-7c6f-4f74-957d-3511c274b1c0" (UID: "37c87365-7c6f-4f74-957d-3511c274b1c0"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.441077 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-kube-api-access-bmxvd" (OuterVolumeSpecName: "kube-api-access-bmxvd") pod "4b58dce2-ca9b-436c-a5a9-fcf1286eb473" (UID: "4b58dce2-ca9b-436c-a5a9-fcf1286eb473"). InnerVolumeSpecName "kube-api-access-bmxvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.445430 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b58dce2-ca9b-436c-a5a9-fcf1286eb473" (UID: "4b58dce2-ca9b-436c-a5a9-fcf1286eb473"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.450322 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-kube-api-access-7j5fn" (OuterVolumeSpecName: "kube-api-access-7j5fn") pod "10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" (UID: "10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a"). InnerVolumeSpecName "kube-api-access-7j5fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.450514 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-kube-api-access-c9qh6" (OuterVolumeSpecName: "kube-api-access-c9qh6") pod "5cbb52a7-4f63-42d4-9958-00ff3dc1c242" (UID: "5cbb52a7-4f63-42d4-9958-00ff3dc1c242"). InnerVolumeSpecName "kube-api-access-c9qh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.471939 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31 is running failed: container process not found" containerID="a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.472765 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31 is running failed: container process not found" containerID="a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.474409 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31 is running failed: container process not found" containerID="a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.474448 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-ngtqn" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="registry-server" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.500161 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.505215 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" (UID: "10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522529 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-catalog-content\") pod \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522613 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7hpk\" (UniqueName: \"kubernetes.io/projected/1a122b67-dfa5-4660-9aec-f7f6fbad9216-kube-api-access-q7hpk\") pod \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522666 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-utilities\") pod \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\" (UID: \"1a122b67-dfa5-4660-9aec-f7f6fbad9216\") " Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522824 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522841 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmxvd\" (UniqueName: \"kubernetes.io/projected/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-kube-api-access-bmxvd\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522856 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j5fn\" (UniqueName: \"kubernetes.io/projected/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-kube-api-access-7j5fn\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522867 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522879 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9qh6\" (UniqueName: \"kubernetes.io/projected/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-kube-api-access-c9qh6\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522890 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtq8s\" (UniqueName: \"kubernetes.io/projected/37c87365-7c6f-4f74-957d-3511c274b1c0-kube-api-access-vtq8s\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522901 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522912 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b58dce2-ca9b-436c-a5a9-fcf1286eb473-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522925 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522936 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37c87365-7c6f-4f74-957d-3511c274b1c0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.522946 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.524201 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-utilities" (OuterVolumeSpecName: "utilities") pod "1a122b67-dfa5-4660-9aec-f7f6fbad9216" (UID: "1a122b67-dfa5-4660-9aec-f7f6fbad9216"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.524445 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cbb52a7-4f63-42d4-9958-00ff3dc1c242" (UID: "5cbb52a7-4f63-42d4-9958-00ff3dc1c242"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.528185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a122b67-dfa5-4660-9aec-f7f6fbad9216-kube-api-access-q7hpk" (OuterVolumeSpecName: "kube-api-access-q7hpk") pod "1a122b67-dfa5-4660-9aec-f7f6fbad9216" (UID: "1a122b67-dfa5-4660-9aec-f7f6fbad9216"). InnerVolumeSpecName "kube-api-access-q7hpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.611439 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a122b67-dfa5-4660-9aec-f7f6fbad9216" (UID: "1a122b67-dfa5-4660-9aec-f7f6fbad9216"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.624341 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.624389 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a122b67-dfa5-4660-9aec-f7f6fbad9216-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.624400 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cbb52a7-4f63-42d4-9958-00ff3dc1c242-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.624410 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7hpk\" (UniqueName: \"kubernetes.io/projected/1a122b67-dfa5-4660-9aec-f7f6fbad9216-kube-api-access-q7hpk\") on node \"crc\" DevicePath \"\"" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.789921 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b4zhm"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.791804 4751 generic.go:334] "Generic (PLEG): container finished" podID="37c87365-7c6f-4f74-957d-3511c274b1c0" containerID="85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4" exitCode=0 Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.791844 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" event={"ID":"37c87365-7c6f-4f74-957d-3511c274b1c0","Type":"ContainerDied","Data":"85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.792092 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" event={"ID":"37c87365-7c6f-4f74-957d-3511c274b1c0","Type":"ContainerDied","Data":"3aa707a92dc10e4c7b6502a49815810dc975af268296f9020124588e818e7800"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.791895 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rfmp4" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.792148 4751 scope.go:117] "RemoveContainer" containerID="85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.802512 4751 generic.go:334] "Generic (PLEG): container finished" podID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerID="803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452" exitCode=0 Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.802568 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwj8h" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.802575 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwj8h" event={"ID":"4b58dce2-ca9b-436c-a5a9-fcf1286eb473","Type":"ContainerDied","Data":"803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.802644 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwj8h" event={"ID":"4b58dce2-ca9b-436c-a5a9-fcf1286eb473","Type":"ContainerDied","Data":"9b4c0506fc0511f7ad605904124700717473d8169a9beade6fc410731be9b1e0"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.811907 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rfmp4"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.813698 4751 generic.go:334] "Generic (PLEG): container finished" podID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerID="65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a" exitCode=0 Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.813758 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg5t" event={"ID":"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a","Type":"ContainerDied","Data":"65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.813778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zg5t" event={"ID":"10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a","Type":"ContainerDied","Data":"81bfcb5e037d6b61fa74c749a0ace797a840ff36294331620850168a5661fc08"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.813863 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zg5t" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.815802 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rfmp4"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.817195 4751 generic.go:334] "Generic (PLEG): container finished" podID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerID="01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb" exitCode=0 Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.817269 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfnmz" event={"ID":"5cbb52a7-4f63-42d4-9958-00ff3dc1c242","Type":"ContainerDied","Data":"01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.817301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfnmz" event={"ID":"5cbb52a7-4f63-42d4-9958-00ff3dc1c242","Type":"ContainerDied","Data":"6db6db212fd7b5ffc59eb3c0aef981a7a61493e869be3747390ba3acc9493a03"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.817400 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfnmz" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.825618 4751 generic.go:334] "Generic (PLEG): container finished" podID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerID="a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31" exitCode=0 Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.826656 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ngtqn" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.826819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngtqn" event={"ID":"1a122b67-dfa5-4660-9aec-f7f6fbad9216","Type":"ContainerDied","Data":"a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.826855 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwj8h"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.826898 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ngtqn" event={"ID":"1a122b67-dfa5-4660-9aec-f7f6fbad9216","Type":"ContainerDied","Data":"06dbf6e5e27cd422f203dacfb16f7d44dad1d6a21a7521a97cca1ffd011ffad3"} Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.830266 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwj8h"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.835577 4751 scope.go:117] "RemoveContainer" containerID="85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4" Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.835970 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4\": container with ID starting with 85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4 not found: ID does not exist" containerID="85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.836010 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4"} err="failed to get container status \"85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4\": rpc error: code = NotFound desc = could not find container \"85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4\": container with ID starting with 85d5626ad98e31f3d3603145976b2bb31b20c0781af36f08001879e0c63a69c4 not found: ID does not exist" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.836059 4751 scope.go:117] "RemoveContainer" containerID="803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.852682 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zg5t"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.858842 4751 scope.go:117] "RemoveContainer" containerID="fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.862834 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8zg5t"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.867976 4751 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-sfnmz"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.872017 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sfnmz"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.875822 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ngtqn"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.878613 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ngtqn"] Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.888064 4751 scope.go:117] "RemoveContainer" containerID="942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.904456 4751 scope.go:117] "RemoveContainer" containerID="803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452" Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.905020 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452\": container with ID starting with 803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452 not found: ID does not exist" containerID="803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.905073 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452"} err="failed to get container status \"803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452\": rpc error: code = NotFound desc = could not find container \"803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452\": container with ID starting with 803ff367fd62e672be6a3760cb4e37557e5f60bd3728160ee6d8218c37faa452 not found: ID does not exist" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.905106 4751 scope.go:117] "RemoveContainer" containerID="fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53" Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.905744 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53\": container with ID starting with fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53 not found: ID does not exist" containerID="fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.905783 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53"} err="failed to get container status \"fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53\": rpc error: code = NotFound desc = could not find container \"fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53\": container with ID starting with fd506f7b74f6f0d404d2aa7c87b588464478d4f79fefd4ea283e156dc146fa53 not found: ID does not exist" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.905809 4751 scope.go:117] "RemoveContainer" containerID="942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007" Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.906120 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007\": container with ID starting with 942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007 not found: ID does not exist" containerID="942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.906146 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007"} err="failed to get container status \"942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007\": rpc error: code = NotFound desc = could not find container \"942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007\": container with ID starting with 942f80e9653f05a3fe8ebf4daa50afe6d247a8b43e90022725f28f821a12a007 not found: ID does not exist" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.906163 4751 scope.go:117] "RemoveContainer" containerID="65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.919763 4751 scope.go:117] "RemoveContainer" containerID="20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.936284 4751 scope.go:117] "RemoveContainer" containerID="f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.959393 4751 scope.go:117] "RemoveContainer" containerID="65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a" Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.960422 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a\": container with ID starting with 65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a not found: ID does not exist" containerID="65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.960636 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a"} err="failed to get container status \"65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a\": rpc error: code = NotFound desc = could not find container \"65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a\": container with ID starting with 65bd27542d1075771f2597286831189105754ff97ba6adde75c41c557eefea5a not found: ID does not exist" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.960848 4751 scope.go:117] "RemoveContainer" containerID="20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab" Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.961706 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab\": container with ID starting with 20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab not found: ID does not exist" containerID="20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.961763 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab"} err="failed to get container status \"20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab\": rpc error: code = NotFound desc = could not find container \"20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab\": container with ID starting with 20bfc3b02c4786d78507c1316391716fcad1192fc5df3b672f9c834e05b084ab not found: ID does not exist" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.961791 4751 scope.go:117] "RemoveContainer" containerID="f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d" Nov 23 03:59:16 crc kubenswrapper[4751]: E1123 03:59:16.962448 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d\": container with ID starting with f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d not found: ID does not exist" containerID="f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.962480 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d"} err="failed to get container status \"f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d\": rpc error: code = NotFound desc = could not find container \"f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d\": container with ID starting with f31be67eeaba09f7b6da98fc63e1bbd8ffad1a986978b68184e7ebc9d222590d not found: ID does not exist" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.962494 4751 scope.go:117] "RemoveContainer" containerID="01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb" Nov 23 03:59:16 crc kubenswrapper[4751]: I1123 03:59:16.986308 4751 scope.go:117] "RemoveContainer" containerID="292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.005579 4751 scope.go:117] "RemoveContainer" containerID="33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.018817 4751 scope.go:117] "RemoveContainer" containerID="01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb" Nov 23 03:59:17 crc kubenswrapper[4751]: E1123 03:59:17.019178 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb\": container with ID starting with 01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb not found: ID does not exist" containerID="01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.019226 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb"} err="failed to get container status \"01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb\": rpc error: code = NotFound desc = could not find container \"01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb\": container with ID starting with 01fd8e196823f64d07a69859695f68adc0a4efeddf5eb1ce1e592e67b46829fb not found: ID does not exist" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.019247 4751 
scope.go:117] "RemoveContainer" containerID="292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae" Nov 23 03:59:17 crc kubenswrapper[4751]: E1123 03:59:17.019541 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae\": container with ID starting with 292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae not found: ID does not exist" containerID="292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.019582 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae"} err="failed to get container status \"292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae\": rpc error: code = NotFound desc = could not find container \"292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae\": container with ID starting with 292062851a224d2fe0a7d24fb060f3f5588e3f6f91c48375d0b0129e9b3df9ae not found: ID does not exist" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.019612 4751 scope.go:117] "RemoveContainer" containerID="33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8" Nov 23 03:59:17 crc kubenswrapper[4751]: E1123 03:59:17.020182 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8\": container with ID starting with 33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8 not found: ID does not exist" containerID="33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.020222 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8"} err="failed to get container status \"33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8\": rpc error: code = NotFound desc = could not find container \"33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8\": container with ID starting with 33bb5539f2aef4f59e4818f1bdd541fca4b0071ec4254c028cfee3736ac747c8 not found: ID does not exist" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.020237 4751 scope.go:117] "RemoveContainer" containerID="a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.037763 4751 scope.go:117] "RemoveContainer" containerID="7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.110747 4751 scope.go:117] "RemoveContainer" containerID="aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.126509 4751 scope.go:117] "RemoveContainer" containerID="a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31" Nov 23 03:59:17 crc kubenswrapper[4751]: E1123 03:59:17.126967 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31\": container with ID starting with a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31 not found: ID does not exist" 
containerID="a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.127028 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31"} err="failed to get container status \"a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31\": rpc error: code = NotFound desc = could not find container \"a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31\": container with ID starting with a1972b009853b9189856c5cac8efcee1e05c53c2bab4a27626e25b7ce4a2fc31 not found: ID does not exist" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.127074 4751 scope.go:117] "RemoveContainer" containerID="7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396" Nov 23 03:59:17 crc kubenswrapper[4751]: E1123 03:59:17.127777 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396\": container with ID starting with 7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396 not found: ID does not exist" containerID="7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.127825 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396"} err="failed to get container status \"7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396\": rpc error: code = NotFound desc = could not find container \"7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396\": container with ID starting with 7e3622a99ae7ef093994cc2b2b4bad92cce2f020e49b52a9cf33a333b7fed396 not found: ID does not exist" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.127855 4751 scope.go:117] "RemoveContainer" containerID="aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00" Nov 23 03:59:17 crc kubenswrapper[4751]: E1123 03:59:17.128270 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00\": container with ID starting with aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00 not found: ID does not exist" containerID="aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.128312 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00"} err="failed to get container status \"aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00\": rpc error: code = NotFound desc = could not find container \"aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00\": container with ID starting with aa03276e4ecf8905e3549f8791679920c1a7cbe12a3c5e08fd3dbda5f16ced00 not found: ID does not exist" Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.835439 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm" event={"ID":"77b14a1a-54d8-4706-95b6-2b94d8dffa43","Type":"ContainerStarted","Data":"1f964fd6dd3dd57f67550c3fbaa8620509a4d10489dcbc4a1e9c199174ba719c"} Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.835528 4751 
Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.835942 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.840085 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm"
Nov 23 03:59:17 crc kubenswrapper[4751]: I1123 03:59:17.859079 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b4zhm" podStartSLOduration=2.858971571 podStartE2EDuration="2.858971571s" podCreationTimestamp="2025-11-23 03:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 03:59:17.857030099 +0000 UTC m=+254.050701488" watchObservedRunningTime="2025-11-23 03:59:17.858971571 +0000 UTC m=+254.052642970"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112564 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mqtml"]
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112765 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112777 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112787 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerName="extract-content"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112793 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerName="extract-content"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112816 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="extract-content"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112824 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="extract-content"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112834 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerName="extract-content"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112842 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerName="extract-content"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112852 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112858 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112868 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerName="extract-utilities"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112875 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerName="extract-utilities"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112882 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerName="extract-utilities"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112888 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerName="extract-utilities"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112896 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112901 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112911 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c87365-7c6f-4f74-957d-3511c274b1c0" containerName="marketplace-operator"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112917 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c87365-7c6f-4f74-957d-3511c274b1c0" containerName="marketplace-operator"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112927 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerName="extract-content"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112934 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerName="extract-content"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112942 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerName="extract-utilities"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112948 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerName="extract-utilities"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112957 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112963 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: E1123 03:59:18.112975 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="extract-utilities"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.112980 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="extract-utilities"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.113070 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.113082 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.113094 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.113103 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c87365-7c6f-4f74-957d-3511c274b1c0" containerName="marketplace-operator"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.113112 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" containerName="registry-server"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.113925 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.116225 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.130450 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqtml"]
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.147313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-catalog-content\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.147487 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-utilities\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.147606 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6vgs\" (UniqueName: \"kubernetes.io/projected/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-kube-api-access-l6vgs\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.248496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-utilities\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.248565 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6vgs\" (UniqueName: \"kubernetes.io/projected/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-kube-api-access-l6vgs\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.248592 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-catalog-content\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.249019 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-catalog-content\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.249448 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-utilities\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.275494 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6vgs\" (UniqueName: \"kubernetes.io/projected/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453-kube-api-access-l6vgs\") pod \"redhat-marketplace-mqtml\" (UID: \"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453\") " pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.308007 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-swqk5"]
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.309231 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.312864 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.316714 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-swqk5"]
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.350138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e37d630-83e9-4049-9b40-b98132ab891b-catalog-content\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.350454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5j64\" (UniqueName: \"kubernetes.io/projected/1e37d630-83e9-4049-9b40-b98132ab891b-kube-api-access-c5j64\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.351026 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e37d630-83e9-4049-9b40-b98132ab891b-utilities\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.438411 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mqtml"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.452626 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5j64\" (UniqueName: \"kubernetes.io/projected/1e37d630-83e9-4049-9b40-b98132ab891b-kube-api-access-c5j64\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.452695 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e37d630-83e9-4049-9b40-b98132ab891b-utilities\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.452762 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e37d630-83e9-4049-9b40-b98132ab891b-catalog-content\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.453266 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e37d630-83e9-4049-9b40-b98132ab891b-catalog-content\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.453420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e37d630-83e9-4049-9b40-b98132ab891b-utilities\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.475666 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5j64\" (UniqueName: \"kubernetes.io/projected/1e37d630-83e9-4049-9b40-b98132ab891b-kube-api-access-c5j64\") pod \"redhat-operators-swqk5\" (UID: \"1e37d630-83e9-4049-9b40-b98132ab891b\") " pod="openshift-marketplace/redhat-operators-swqk5"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.636701 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-swqk5"
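[Editor's note] Each VerifyControllerAttachedVolume / MountVolume.SetUp pair above materializes a directory under the pod's kubelet state dir, following the same /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<name> layout the restorecon lines show at boot. A minimal sketch of that path construction (illustrative helper, not a kubelet API; the "/" in the plugin name is escaped as "~" on disk):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// volumeDir maps an in-tree volume plugin name like "kubernetes.io/empty-dir"
// to the on-disk directory kubelet uses for a given pod UID and volume name.
func volumeDir(podUID, pluginName, volumeName string) string {
	escaped := strings.ReplaceAll(pluginName, "/", "~") // kubernetes.io/empty-dir -> kubernetes.io~empty-dir
	return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", escaped, volumeName)
}

func main() {
	// The catalog-content emptyDir of redhat-marketplace-mqtml from the entries above:
	fmt.Println(volumeDir("373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453", "kubernetes.io/empty-dir", "catalog-content"))
	// -> /var/lib/kubelet/pods/373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/volumes/kubernetes.io~empty-dir/catalog-content
}
```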
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.652723 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a" path="/var/lib/kubelet/pods/10f92fe6-0dcb-4ff1-b9d3-be74e9b7866a/volumes"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.653427 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a122b67-dfa5-4660-9aec-f7f6fbad9216" path="/var/lib/kubelet/pods/1a122b67-dfa5-4660-9aec-f7f6fbad9216/volumes"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.654282 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c87365-7c6f-4f74-957d-3511c274b1c0" path="/var/lib/kubelet/pods/37c87365-7c6f-4f74-957d-3511c274b1c0/volumes"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.655294 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b58dce2-ca9b-436c-a5a9-fcf1286eb473" path="/var/lib/kubelet/pods/4b58dce2-ca9b-436c-a5a9-fcf1286eb473/volumes"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.655892 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cbb52a7-4f63-42d4-9958-00ff3dc1c242" path="/var/lib/kubelet/pods/5cbb52a7-4f63-42d4-9958-00ff3dc1c242/volumes"
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.826065 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mqtml"]
Nov 23 03:59:18 crc kubenswrapper[4751]: W1123 03:59:18.835468 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod373f0c7f_d0d9_49c1_9f9e_6fdc3e6c7453.slice/crio-6d22d842cff9e7191f725d50445b5addeac2584995a3e797347e60635e8abea6 WatchSource:0}: Error finding container 6d22d842cff9e7191f725d50445b5addeac2584995a3e797347e60635e8abea6: Status 404 returned error can't find the container with id 6d22d842cff9e7191f725d50445b5addeac2584995a3e797347e60635e8abea6
Nov 23 03:59:18 crc kubenswrapper[4751]: I1123 03:59:18.846274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqtml" event={"ID":"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453","Type":"ContainerStarted","Data":"6d22d842cff9e7191f725d50445b5addeac2584995a3e797347e60635e8abea6"}
Nov 23 03:59:19 crc kubenswrapper[4751]: I1123 03:59:19.022387 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-swqk5"]
Nov 23 03:59:19 crc kubenswrapper[4751]: I1123 03:59:19.855731 4751 generic.go:334] "Generic (PLEG): container finished" podID="373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453" containerID="568c7fa0cb35bb9b14193c55957cb616892e7b5a6006084d9c76175e66a4a9d5" exitCode=0
Nov 23 03:59:19 crc kubenswrapper[4751]: I1123 03:59:19.855812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqtml" event={"ID":"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453","Type":"ContainerDied","Data":"568c7fa0cb35bb9b14193c55957cb616892e7b5a6006084d9c76175e66a4a9d5"}
Nov 23 03:59:19 crc kubenswrapper[4751]: I1123 03:59:19.860974 4751 generic.go:334] "Generic (PLEG): container finished" podID="1e37d630-83e9-4049-9b40-b98132ab891b" containerID="7cc0ebf3e3e2dc1eb520cfb35ce54ea6142fbc5c7724f39a5d84c5a4bd86faf7" exitCode=0
Nov 23 03:59:19 crc kubenswrapper[4751]: I1123 03:59:19.861472 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swqk5" event={"ID":"1e37d630-83e9-4049-9b40-b98132ab891b","Type":"ContainerDied","Data":"7cc0ebf3e3e2dc1eb520cfb35ce54ea6142fbc5c7724f39a5d84c5a4bd86faf7"}
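[Editor's note] The five "Cleaned up orphaned pod volumes dir" entries above are the kubelet's periodic orphan sweep: any /var/lib/kubelet/pods/<uid> directory whose UID no longer matches an active pod, and whose volumes have all been unmounted, gets its volumes dir removed. A hypothetical sketch of that check (stand-in logic, not kubelet source; the real sweep also verifies nothing is still mounted before removing):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs reports pod state directories whose UID is not in
// the active-pod set; the real kubelet would then remove them once it has
// confirmed no volume mounts remain underneath.
func cleanupOrphanedPodDirs(active map[string]bool) error {
	entries, err := os.ReadDir("/var/lib/kubelet/pods")
	if err != nil {
		return err // e.g. not running on a node
	}
	for _, e := range entries {
		uid := e.Name()
		if active[uid] {
			continue
		}
		volumes := filepath.Join("/var/lib/kubelet/pods", uid, "volumes")
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", uid, volumes)
		// os.RemoveAll(volumes) would go here; omitted as it is destructive.
	}
	return nil
}

func main() {
	_ = cleanupOrphanedPodDirs(map[string]bool{"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453": true})
}
```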
event={"ID":"1e37d630-83e9-4049-9b40-b98132ab891b","Type":"ContainerDied","Data":"7cc0ebf3e3e2dc1eb520cfb35ce54ea6142fbc5c7724f39a5d84c5a4bd86faf7"} Nov 23 03:59:19 crc kubenswrapper[4751]: I1123 03:59:19.861497 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swqk5" event={"ID":"1e37d630-83e9-4049-9b40-b98132ab891b","Type":"ContainerStarted","Data":"c3e8cd9a7f91d70a563fb9dcdf14da915ba0ed122956e249dc3d9c40a05daa4d"} Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.515171 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lzcgv"] Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.518655 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzcgv"] Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.518756 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.521833 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.584093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg9wz\" (UniqueName: \"kubernetes.io/projected/79866610-f6cc-4403-822f-6e76628ed0ad-kube-api-access-bg9wz\") pod \"certified-operators-lzcgv\" (UID: \"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.584160 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79866610-f6cc-4403-822f-6e76628ed0ad-catalog-content\") pod \"certified-operators-lzcgv\" (UID: \"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.584197 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79866610-f6cc-4403-822f-6e76628ed0ad-utilities\") pod \"certified-operators-lzcgv\" (UID: \"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.685034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg9wz\" (UniqueName: \"kubernetes.io/projected/79866610-f6cc-4403-822f-6e76628ed0ad-kube-api-access-bg9wz\") pod \"certified-operators-lzcgv\" (UID: \"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.685943 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79866610-f6cc-4403-822f-6e76628ed0ad-catalog-content\") pod \"certified-operators-lzcgv\" (UID: \"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.686010 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79866610-f6cc-4403-822f-6e76628ed0ad-utilities\") pod \"certified-operators-lzcgv\" (UID: 
\"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.686030 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79866610-f6cc-4403-822f-6e76628ed0ad-catalog-content\") pod \"certified-operators-lzcgv\" (UID: \"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.686321 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79866610-f6cc-4403-822f-6e76628ed0ad-utilities\") pod \"certified-operators-lzcgv\" (UID: \"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.715676 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg9wz\" (UniqueName: \"kubernetes.io/projected/79866610-f6cc-4403-822f-6e76628ed0ad-kube-api-access-bg9wz\") pod \"certified-operators-lzcgv\" (UID: \"79866610-f6cc-4403-822f-6e76628ed0ad\") " pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.720179 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kfgvt"] Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.722065 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.722622 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kfgvt"] Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.725237 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.788087 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f021228-7e3a-4286-a412-59792b2938ce-catalog-content\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.788306 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6z6\" (UniqueName: \"kubernetes.io/projected/1f021228-7e3a-4286-a412-59792b2938ce-kube-api-access-zc6z6\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.788475 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f021228-7e3a-4286-a412-59792b2938ce-utilities\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.839786 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.866454 4751 generic.go:334] "Generic (PLEG): container finished" podID="373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453" containerID="92ede67a69bba1f07aedf0282b2fbc4950060db83ff073410eba35a6e9b89701" exitCode=0 Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.866518 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqtml" event={"ID":"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453","Type":"ContainerDied","Data":"92ede67a69bba1f07aedf0282b2fbc4950060db83ff073410eba35a6e9b89701"} Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.872151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swqk5" event={"ID":"1e37d630-83e9-4049-9b40-b98132ab891b","Type":"ContainerStarted","Data":"95a403adaee6c76b2090cb8eca201ef198bdf30d3e8c8f68c24f37549f618000"} Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.893203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f021228-7e3a-4286-a412-59792b2938ce-utilities\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.893843 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f021228-7e3a-4286-a412-59792b2938ce-utilities\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.894027 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f021228-7e3a-4286-a412-59792b2938ce-catalog-content\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.894833 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6z6\" (UniqueName: \"kubernetes.io/projected/1f021228-7e3a-4286-a412-59792b2938ce-kube-api-access-zc6z6\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.894670 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f021228-7e3a-4286-a412-59792b2938ce-catalog-content\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:20 crc kubenswrapper[4751]: I1123 03:59:20.912186 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6z6\" (UniqueName: \"kubernetes.io/projected/1f021228-7e3a-4286-a412-59792b2938ce-kube-api-access-zc6z6\") pod \"community-operators-kfgvt\" (UID: \"1f021228-7e3a-4286-a412-59792b2938ce\") " pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.056864 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.220114 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzcgv"] Nov 23 03:59:21 crc kubenswrapper[4751]: W1123 03:59:21.243718 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79866610_f6cc_4403_822f_6e76628ed0ad.slice/crio-ecde1c7e83b7983963bfebee4e3913d7415046f3a889dc5bf431ae5d8c040c5b WatchSource:0}: Error finding container ecde1c7e83b7983963bfebee4e3913d7415046f3a889dc5bf431ae5d8c040c5b: Status 404 returned error can't find the container with id ecde1c7e83b7983963bfebee4e3913d7415046f3a889dc5bf431ae5d8c040c5b Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.422624 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kfgvt"] Nov 23 03:59:21 crc kubenswrapper[4751]: W1123 03:59:21.453735 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f021228_7e3a_4286_a412_59792b2938ce.slice/crio-8e38b3c109fb2784a1d499ce2c811adc06d87e8dfb82192dad44b3f1acaaf237 WatchSource:0}: Error finding container 8e38b3c109fb2784a1d499ce2c811adc06d87e8dfb82192dad44b3f1acaaf237: Status 404 returned error can't find the container with id 8e38b3c109fb2784a1d499ce2c811adc06d87e8dfb82192dad44b3f1acaaf237 Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.882784 4751 generic.go:334] "Generic (PLEG): container finished" podID="1e37d630-83e9-4049-9b40-b98132ab891b" containerID="95a403adaee6c76b2090cb8eca201ef198bdf30d3e8c8f68c24f37549f618000" exitCode=0 Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.883102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swqk5" event={"ID":"1e37d630-83e9-4049-9b40-b98132ab891b","Type":"ContainerDied","Data":"95a403adaee6c76b2090cb8eca201ef198bdf30d3e8c8f68c24f37549f618000"} Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.896571 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mqtml" event={"ID":"373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453","Type":"ContainerStarted","Data":"2027c7f3f3258954ce9e060647637b167c1637caa8a10fa927fbefc26c6b9aaa"} Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.898507 4751 generic.go:334] "Generic (PLEG): container finished" podID="1f021228-7e3a-4286-a412-59792b2938ce" containerID="09cad2f47e7e9b71081f1ce331d0704e0e7073714672a1d58c0e67c5ab0254ef" exitCode=0 Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.898606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfgvt" event={"ID":"1f021228-7e3a-4286-a412-59792b2938ce","Type":"ContainerDied","Data":"09cad2f47e7e9b71081f1ce331d0704e0e7073714672a1d58c0e67c5ab0254ef"} Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.898634 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfgvt" event={"ID":"1f021228-7e3a-4286-a412-59792b2938ce","Type":"ContainerStarted","Data":"8e38b3c109fb2784a1d499ce2c811adc06d87e8dfb82192dad44b3f1acaaf237"} Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.900355 4751 generic.go:334] "Generic (PLEG): container finished" podID="79866610-f6cc-4403-822f-6e76628ed0ad" containerID="63d6ad34b8173ec9850f380a79bb52289a5bfd5705a8136a64bd2ebaca17e405" 
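[Editor's note] The "Generic (PLEG): container finished ... exitCode=0" / ContainerDied / ContainerStarted entries above come from the kubelet's pod lifecycle event generator: a relist diffs runtime state and emits per-container events that the sync loop reacts to. A condensed sketch of that event shape (assumed types for illustration, not the kubelet's actual definitions):

```go
package main

import "fmt"

type PodLifecycleEventType string

const (
	ContainerStarted PodLifecycleEventType = "ContainerStarted"
	ContainerDied    PodLifecycleEventType = "ContainerDied"
)

// PodLifecycleEvent mirrors the fields visible in the log lines above.
type PodLifecycleEvent struct {
	PodID string                // pod UID
	Type  PodLifecycleEventType // what changed
	Data  string                // container or sandbox ID
}

func handle(ev PodLifecycleEvent) {
	switch ev.Type {
	case ContainerDied:
		// An init container such as extract-utilities exiting 0 shows up as
		// "container finished ... exitCode=0" followed by a ContainerDied event,
		// which triggers a pod sync to start the next container.
		fmt.Printf("pod %s: container %s died, trigger pod sync\n", ev.PodID, ev.Data[:8])
	case ContainerStarted:
		fmt.Printf("pod %s: container %s started\n", ev.PodID, ev.Data[:8])
	}
}

func main() {
	handle(PodLifecycleEvent{
		PodID: "79866610-f6cc-4403-822f-6e76628ed0ad",
		Type:  ContainerDied,
		Data:  "63d6ad34b8173ec9850f380a79bb52289a5bfd5705a8136a64bd2ebaca17e405",
	})
}
```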
Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.900397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzcgv" event={"ID":"79866610-f6cc-4403-822f-6e76628ed0ad","Type":"ContainerDied","Data":"63d6ad34b8173ec9850f380a79bb52289a5bfd5705a8136a64bd2ebaca17e405"}
Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.900443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzcgv" event={"ID":"79866610-f6cc-4403-822f-6e76628ed0ad","Type":"ContainerStarted","Data":"ecde1c7e83b7983963bfebee4e3913d7415046f3a889dc5bf431ae5d8c040c5b"}
Nov 23 03:59:21 crc kubenswrapper[4751]: I1123 03:59:21.923057 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mqtml" podStartSLOduration=2.491156123 podStartE2EDuration="3.923039747s" podCreationTimestamp="2025-11-23 03:59:18 +0000 UTC" firstStartedPulling="2025-11-23 03:59:19.85953048 +0000 UTC m=+256.053201879" lastFinishedPulling="2025-11-23 03:59:21.291414144 +0000 UTC m=+257.485085503" observedRunningTime="2025-11-23 03:59:21.922117912 +0000 UTC m=+258.115789301" watchObservedRunningTime="2025-11-23 03:59:21.923039747 +0000 UTC m=+258.116711106"
Nov 23 03:59:22 crc kubenswrapper[4751]: I1123 03:59:22.907022 4751 generic.go:334] "Generic (PLEG): container finished" podID="1f021228-7e3a-4286-a412-59792b2938ce" containerID="2d9e245eab5ae7ff2bd4507b3b86baf39ff5282fcb6cca4a2ccf4fa4220a7e5c" exitCode=0
Nov 23 03:59:22 crc kubenswrapper[4751]: I1123 03:59:22.907114 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfgvt" event={"ID":"1f021228-7e3a-4286-a412-59792b2938ce","Type":"ContainerDied","Data":"2d9e245eab5ae7ff2bd4507b3b86baf39ff5282fcb6cca4a2ccf4fa4220a7e5c"}
Nov 23 03:59:22 crc kubenswrapper[4751]: I1123 03:59:22.911422 4751 generic.go:334] "Generic (PLEG): container finished" podID="79866610-f6cc-4403-822f-6e76628ed0ad" containerID="afe083c47849c0d6b1cb6c9dd40badfb248e8b0a11d65e753fc1c85d6b2f26ef" exitCode=0
Nov 23 03:59:22 crc kubenswrapper[4751]: I1123 03:59:22.911478 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzcgv" event={"ID":"79866610-f6cc-4403-822f-6e76628ed0ad","Type":"ContainerDied","Data":"afe083c47849c0d6b1cb6c9dd40badfb248e8b0a11d65e753fc1c85d6b2f26ef"}
Nov 23 03:59:22 crc kubenswrapper[4751]: I1123 03:59:22.914119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-swqk5" event={"ID":"1e37d630-83e9-4049-9b40-b98132ab891b","Type":"ContainerStarted","Data":"ffa87ccb14d3b3153b439d0e2bf9a23969e40c56a344194001a94bf3be0f81d5"}
Nov 23 03:59:22 crc kubenswrapper[4751]: I1123 03:59:22.948436 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-swqk5" podStartSLOduration=2.531052866 podStartE2EDuration="4.948416853s" podCreationTimestamp="2025-11-23 03:59:18 +0000 UTC" firstStartedPulling="2025-11-23 03:59:19.864558916 +0000 UTC m=+256.058230315" lastFinishedPulling="2025-11-23 03:59:22.281922943 +0000 UTC m=+258.475594302" observedRunningTime="2025-11-23 03:59:22.943002736 +0000 UTC m=+259.136674095" watchObservedRunningTime="2025-11-23 03:59:22.948416853 +0000 UTC m=+259.142088202"
Nov 23 03:59:23 crc kubenswrapper[4751]: I1123 03:59:23.922479 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfgvt" event={"ID":"1f021228-7e3a-4286-a412-59792b2938ce","Type":"ContainerStarted","Data":"7dab5e2ef64971cfcda9ab66dcda16b5b6426a106d2aff7d4f9259004a77513d"}
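[Editor's note] A worked check of the redhat-marketplace-mqtml startup numbers above, under the assumption that podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and that the SLO duration additionally excludes the image-pull window. Subtracting firstStartedPulling..lastFinishedPulling from the E2E duration reproduces the logged podStartSLOduration to within tens of nanoseconds (the residual presumably comes from the tracker's use of monotonic clock readings):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse reads the timestamp format used in the log lines above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-23 03:59:18 +0000 UTC")
	firstPull := mustParse("2025-11-23 03:59:19.85953048 +0000 UTC")
	lastPull := mustParse("2025-11-23 03:59:21.291414144 +0000 UTC")
	observed := mustParse("2025-11-23 03:59:21.923039747 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // assumed formula: E2E minus pull window
	fmt.Println(e2e) // 3.923039747s, the logged podStartE2EDuration
	fmt.Println(slo) // ~2.491156083s vs the logged podStartSLOduration=2.491156123
}
```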
pod="openshift-marketplace/community-operators-kfgvt" event={"ID":"1f021228-7e3a-4286-a412-59792b2938ce","Type":"ContainerStarted","Data":"7dab5e2ef64971cfcda9ab66dcda16b5b6426a106d2aff7d4f9259004a77513d"} Nov 23 03:59:23 crc kubenswrapper[4751]: I1123 03:59:23.925446 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzcgv" event={"ID":"79866610-f6cc-4403-822f-6e76628ed0ad","Type":"ContainerStarted","Data":"c3fc44eda81a474278f4689707cbd7d4df79777887d73cd6679cc7b98f03476e"} Nov 23 03:59:23 crc kubenswrapper[4751]: I1123 03:59:23.949580 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kfgvt" podStartSLOduration=2.538162972 podStartE2EDuration="3.9495533s" podCreationTimestamp="2025-11-23 03:59:20 +0000 UTC" firstStartedPulling="2025-11-23 03:59:21.900254089 +0000 UTC m=+258.093925448" lastFinishedPulling="2025-11-23 03:59:23.311644427 +0000 UTC m=+259.505315776" observedRunningTime="2025-11-23 03:59:23.94810168 +0000 UTC m=+260.141773079" watchObservedRunningTime="2025-11-23 03:59:23.9495533 +0000 UTC m=+260.143224699" Nov 23 03:59:23 crc kubenswrapper[4751]: I1123 03:59:23.980481 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lzcgv" podStartSLOduration=2.482995043 podStartE2EDuration="3.980453379s" podCreationTimestamp="2025-11-23 03:59:20 +0000 UTC" firstStartedPulling="2025-11-23 03:59:21.901230825 +0000 UTC m=+258.094902184" lastFinishedPulling="2025-11-23 03:59:23.398689171 +0000 UTC m=+259.592360520" observedRunningTime="2025-11-23 03:59:23.977956181 +0000 UTC m=+260.171627620" watchObservedRunningTime="2025-11-23 03:59:23.980453379 +0000 UTC m=+260.174124768" Nov 23 03:59:28 crc kubenswrapper[4751]: I1123 03:59:28.439223 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mqtml" Nov 23 03:59:28 crc kubenswrapper[4751]: I1123 03:59:28.439666 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mqtml" Nov 23 03:59:28 crc kubenswrapper[4751]: I1123 03:59:28.482957 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mqtml" Nov 23 03:59:28 crc kubenswrapper[4751]: I1123 03:59:28.637144 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-swqk5" Nov 23 03:59:28 crc kubenswrapper[4751]: I1123 03:59:28.637243 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-swqk5" Nov 23 03:59:28 crc kubenswrapper[4751]: I1123 03:59:28.691411 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-swqk5" Nov 23 03:59:29 crc kubenswrapper[4751]: I1123 03:59:29.010929 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-swqk5" Nov 23 03:59:29 crc kubenswrapper[4751]: I1123 03:59:29.019498 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mqtml" Nov 23 03:59:30 crc kubenswrapper[4751]: I1123 03:59:30.840248 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:30 crc kubenswrapper[4751]: I1123 03:59:30.840594 4751 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:30 crc kubenswrapper[4751]: I1123 03:59:30.908808 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:31 crc kubenswrapper[4751]: I1123 03:59:31.024354 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lzcgv" Nov 23 03:59:31 crc kubenswrapper[4751]: I1123 03:59:31.058051 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:31 crc kubenswrapper[4751]: I1123 03:59:31.059951 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:31 crc kubenswrapper[4751]: I1123 03:59:31.113526 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kfgvt" Nov 23 03:59:32 crc kubenswrapper[4751]: I1123 03:59:32.019394 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kfgvt" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.143912 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl"] Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.145209 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.147409 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.147571 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.154255 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl"] Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.332026 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-config-volume\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.332115 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6lnv\" (UniqueName: \"kubernetes.io/projected/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-kube-api-access-f6lnv\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.332154 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-secret-volume\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.433831 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6lnv\" (UniqueName: \"kubernetes.io/projected/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-kube-api-access-f6lnv\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.433945 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-secret-volume\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.434068 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-config-volume\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.435973 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-config-volume\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.442455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-secret-volume\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.464704 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6lnv\" (UniqueName: \"kubernetes.io/projected/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-kube-api-access-f6lnv\") pod \"collect-profiles-29397840-vqxkl\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.763529 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:00 crc kubenswrapper[4751]: I1123 04:00:00.980461 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl"] Nov 23 04:00:01 crc kubenswrapper[4751]: I1123 04:00:01.163025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" event={"ID":"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef","Type":"ContainerStarted","Data":"22d5fd62dbfd264ae142854e7fe24446be7cd2aa856c6c50f5b9082fdab75bb6"} Nov 23 04:00:01 crc kubenswrapper[4751]: I1123 04:00:01.163065 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" event={"ID":"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef","Type":"ContainerStarted","Data":"986636503b90bdfe001c8406f6ad9486be2337a29b7b418593ba6c9433a9bb7d"} Nov 23 04:00:01 crc kubenswrapper[4751]: I1123 04:00:01.181061 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" podStartSLOduration=1.181043299 podStartE2EDuration="1.181043299s" podCreationTimestamp="2025-11-23 04:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:00:01.180787722 +0000 UTC m=+297.374459101" watchObservedRunningTime="2025-11-23 04:00:01.181043299 +0000 UTC m=+297.374714658" Nov 23 04:00:02 crc kubenswrapper[4751]: I1123 04:00:02.186616 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8ea08a3-ec8d-4f23-8448-de8c2266e1ef" containerID="22d5fd62dbfd264ae142854e7fe24446be7cd2aa856c6c50f5b9082fdab75bb6" exitCode=0 Nov 23 04:00:02 crc kubenswrapper[4751]: I1123 04:00:02.186771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" event={"ID":"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef","Type":"ContainerDied","Data":"22d5fd62dbfd264ae142854e7fe24446be7cd2aa856c6c50f5b9082fdab75bb6"} Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.457033 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.581619 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-config-volume\") pod \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.581991 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-secret-volume\") pod \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.582012 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6lnv\" (UniqueName: \"kubernetes.io/projected/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-kube-api-access-f6lnv\") pod \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\" (UID: \"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef\") " Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.583683 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8ea08a3-ec8d-4f23-8448-de8c2266e1ef" (UID: "e8ea08a3-ec8d-4f23-8448-de8c2266e1ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.588565 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8ea08a3-ec8d-4f23-8448-de8c2266e1ef" (UID: "e8ea08a3-ec8d-4f23-8448-de8c2266e1ef"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.589157 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-kube-api-access-f6lnv" (OuterVolumeSpecName: "kube-api-access-f6lnv") pod "e8ea08a3-ec8d-4f23-8448-de8c2266e1ef" (UID: "e8ea08a3-ec8d-4f23-8448-de8c2266e1ef"). InnerVolumeSpecName "kube-api-access-f6lnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.684248 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.684294 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 04:00:03 crc kubenswrapper[4751]: I1123 04:00:03.684305 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6lnv\" (UniqueName: \"kubernetes.io/projected/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef-kube-api-access-f6lnv\") on node \"crc\" DevicePath \"\"" Nov 23 04:00:04 crc kubenswrapper[4751]: I1123 04:00:04.202934 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" event={"ID":"e8ea08a3-ec8d-4f23-8448-de8c2266e1ef","Type":"ContainerDied","Data":"986636503b90bdfe001c8406f6ad9486be2337a29b7b418593ba6c9433a9bb7d"} Nov 23 04:00:04 crc kubenswrapper[4751]: I1123 04:00:04.202995 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="986636503b90bdfe001c8406f6ad9486be2337a29b7b418593ba6c9433a9bb7d" Nov 23 04:00:04 crc kubenswrapper[4751]: I1123 04:00:04.203021 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl" Nov 23 04:00:04 crc kubenswrapper[4751]: I1123 04:00:04.428032 4751 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 23 04:00:38 crc kubenswrapper[4751]: I1123 04:00:38.114843 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:00:38 crc kubenswrapper[4751]: I1123 04:00:38.115671 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:01:08 crc kubenswrapper[4751]: I1123 04:01:08.114965 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:01:08 crc kubenswrapper[4751]: I1123 04:01:08.115709 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.114843 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.115485 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.115542 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.116251 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10f32d0d7f2e62c478ddb48470cd1194ce502e5eae263cb6ce53a7e62595816a"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.116430 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://10f32d0d7f2e62c478ddb48470cd1194ce502e5eae263cb6ce53a7e62595816a" gracePeriod=600 Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.859388 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="10f32d0d7f2e62c478ddb48470cd1194ce502e5eae263cb6ce53a7e62595816a" exitCode=0 Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.860020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"10f32d0d7f2e62c478ddb48470cd1194ce502e5eae263cb6ce53a7e62595816a"} Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.860061 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"df0bbfb2499535a0d9716c88e2a6d4d180a4c4bf5b034768201f2f7c48197b2e"} Nov 23 04:01:38 crc kubenswrapper[4751]: I1123 04:01:38.860090 4751 scope.go:117] "RemoveContainer" containerID="c8e7236c191131bb68033afbb7996299e4672141b36bcf029755efe84a999bd1" Nov 23 04:02:52 crc kubenswrapper[4751]: I1123 04:02:52.977198 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xcsr6"] Nov 23 04:02:52 crc kubenswrapper[4751]: E1123 04:02:52.977957 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ea08a3-ec8d-4f23-8448-de8c2266e1ef" containerName="collect-profiles" Nov 23 04:02:52 crc kubenswrapper[4751]: I1123 04:02:52.977974 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ea08a3-ec8d-4f23-8448-de8c2266e1ef" containerName="collect-profiles" Nov 23 04:02:52 crc kubenswrapper[4751]: I1123 04:02:52.978098 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ea08a3-ec8d-4f23-8448-de8c2266e1ef" containerName="collect-profiles" Nov 23 04:02:52 crc kubenswrapper[4751]: I1123 04:02:52.978596 4751 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:52 crc kubenswrapper[4751]: I1123 04:02:52.993798 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xcsr6"] Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.105945 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6192e92e-34f3-4526-b7f9-0048dab93f4f-registry-certificates\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.106042 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6192e92e-34f3-4526-b7f9-0048dab93f4f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.106118 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6192e92e-34f3-4526-b7f9-0048dab93f4f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.106209 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-registry-tls\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.106340 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.106454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ct7z\" (UniqueName: \"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-kube-api-access-9ct7z\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.106533 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6192e92e-34f3-4526-b7f9-0048dab93f4f-trusted-ca\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.106587 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-bound-sa-token\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.137416 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.207847 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-registry-tls\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.207937 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ct7z\" (UniqueName: \"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-kube-api-access-9ct7z\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.207981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6192e92e-34f3-4526-b7f9-0048dab93f4f-trusted-ca\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.208001 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-bound-sa-token\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.208048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6192e92e-34f3-4526-b7f9-0048dab93f4f-registry-certificates\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.208069 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6192e92e-34f3-4526-b7f9-0048dab93f4f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.208093 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6192e92e-34f3-4526-b7f9-0048dab93f4f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.209745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6192e92e-34f3-4526-b7f9-0048dab93f4f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.212282 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6192e92e-34f3-4526-b7f9-0048dab93f4f-trusted-ca\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.212782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6192e92e-34f3-4526-b7f9-0048dab93f4f-registry-certificates\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.219116 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6192e92e-34f3-4526-b7f9-0048dab93f4f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.221394 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-registry-tls\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.225837 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-bound-sa-token\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.236138 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ct7z\" (UniqueName: \"kubernetes.io/projected/6192e92e-34f3-4526-b7f9-0048dab93f4f-kube-api-access-9ct7z\") pod \"image-registry-66df7c8f76-xcsr6\" (UID: \"6192e92e-34f3-4526-b7f9-0048dab93f4f\") " pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.295973 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:53 crc kubenswrapper[4751]: I1123 04:02:53.605506 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xcsr6"] Nov 23 04:02:54 crc kubenswrapper[4751]: I1123 04:02:54.393852 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" event={"ID":"6192e92e-34f3-4526-b7f9-0048dab93f4f","Type":"ContainerStarted","Data":"eb7d7f6bc1f19b75eae66e628b12ca6f0749be1bb691a0f0c5f8ba2e7d1d2620"} Nov 23 04:02:54 crc kubenswrapper[4751]: I1123 04:02:54.394401 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:02:54 crc kubenswrapper[4751]: I1123 04:02:54.394426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" event={"ID":"6192e92e-34f3-4526-b7f9-0048dab93f4f","Type":"ContainerStarted","Data":"91ca59fab6313ccb2d128583e7b21b0bde671d0b786856018998137ffa267970"} Nov 23 04:02:54 crc kubenswrapper[4751]: I1123 04:02:54.428860 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" podStartSLOduration=2.428833059 podStartE2EDuration="2.428833059s" podCreationTimestamp="2025-11-23 04:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:02:54.421126447 +0000 UTC m=+470.614797866" watchObservedRunningTime="2025-11-23 04:02:54.428833059 +0000 UTC m=+470.622504448" Nov 23 04:03:13 crc kubenswrapper[4751]: I1123 04:03:13.302603 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xcsr6" Nov 23 04:03:13 crc kubenswrapper[4751]: I1123 04:03:13.370971 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p6j49"] Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.114687 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.115329 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.421454 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" podUID="5f5740c4-4925-4b31-a055-45993f3811b8" containerName="registry" containerID="cri-o://8670e560dc9b91bab57406729a34aeea560c4aefc9d2277faa478f3e38f18a9f" gracePeriod=30 Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.698541 4751 generic.go:334] "Generic (PLEG): container finished" podID="5f5740c4-4925-4b31-a055-45993f3811b8" containerID="8670e560dc9b91bab57406729a34aeea560c4aefc9d2277faa478f3e38f18a9f" exitCode=0 Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.698597 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" event={"ID":"5f5740c4-4925-4b31-a055-45993f3811b8","Type":"ContainerDied","Data":"8670e560dc9b91bab57406729a34aeea560c4aefc9d2277faa478f3e38f18a9f"} Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.861815 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.957816 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-registry-tls\") pod \"5f5740c4-4925-4b31-a055-45993f3811b8\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.957879 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-bound-sa-token\") pod \"5f5740c4-4925-4b31-a055-45993f3811b8\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.957907 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f5740c4-4925-4b31-a055-45993f3811b8-installation-pull-secrets\") pod \"5f5740c4-4925-4b31-a055-45993f3811b8\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.957931 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crzxh\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-kube-api-access-crzxh\") pod \"5f5740c4-4925-4b31-a055-45993f3811b8\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.958073 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5f5740c4-4925-4b31-a055-45993f3811b8\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.958137 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f5740c4-4925-4b31-a055-45993f3811b8-ca-trust-extracted\") pod \"5f5740c4-4925-4b31-a055-45993f3811b8\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.959334 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-registry-certificates\") pod \"5f5740c4-4925-4b31-a055-45993f3811b8\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.959420 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-trusted-ca\") pod \"5f5740c4-4925-4b31-a055-45993f3811b8\" (UID: \"5f5740c4-4925-4b31-a055-45993f3811b8\") " Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.960215 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5f5740c4-4925-4b31-a055-45993f3811b8" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.960485 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.960831 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5f5740c4-4925-4b31-a055-45993f3811b8" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.966013 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5f5740c4-4925-4b31-a055-45993f3811b8" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.966299 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5740c4-4925-4b31-a055-45993f3811b8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5f5740c4-4925-4b31-a055-45993f3811b8" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.966578 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-kube-api-access-crzxh" (OuterVolumeSpecName: "kube-api-access-crzxh") pod "5f5740c4-4925-4b31-a055-45993f3811b8" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8"). InnerVolumeSpecName "kube-api-access-crzxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.970545 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5f5740c4-4925-4b31-a055-45993f3811b8" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.979101 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5f5740c4-4925-4b31-a055-45993f3811b8" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:03:38 crc kubenswrapper[4751]: I1123 04:03:38.987245 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5740c4-4925-4b31-a055-45993f3811b8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5f5740c4-4925-4b31-a055-45993f3811b8" (UID: "5f5740c4-4925-4b31-a055-45993f3811b8"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.061466 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.061705 4751 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f5740c4-4925-4b31-a055-45993f3811b8-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.061856 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crzxh\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-kube-api-access-crzxh\") on node \"crc\" DevicePath \"\"" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.061961 4751 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f5740c4-4925-4b31-a055-45993f3811b8-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.062062 4751 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f5740c4-4925-4b31-a055-45993f3811b8-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.062161 4751 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f5740c4-4925-4b31-a055-45993f3811b8-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.707282 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" event={"ID":"5f5740c4-4925-4b31-a055-45993f3811b8","Type":"ContainerDied","Data":"95e393fc7f70d2b46f5339075821c6f4724ccb849c4b885a55abf033cc4bc96c"} Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.707791 4751 scope.go:117] "RemoveContainer" containerID="8670e560dc9b91bab57406729a34aeea560c4aefc9d2277faa478f3e38f18a9f" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.707500 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p6j49" Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.752898 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p6j49"] Nov 23 04:03:39 crc kubenswrapper[4751]: I1123 04:03:39.760782 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p6j49"] Nov 23 04:03:40 crc kubenswrapper[4751]: I1123 04:03:40.657024 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5740c4-4925-4b31-a055-45993f3811b8" path="/var/lib/kubelet/pods/5f5740c4-4925-4b31-a055-45993f3811b8/volumes" Nov 23 04:04:08 crc kubenswrapper[4751]: I1123 04:04:08.114698 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:04:08 crc kubenswrapper[4751]: I1123 04:04:08.115405 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.489363 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-n7m2q"] Nov 23 04:04:27 crc kubenswrapper[4751]: E1123 04:04:27.490246 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5740c4-4925-4b31-a055-45993f3811b8" containerName="registry" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.490265 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5740c4-4925-4b31-a055-45993f3811b8" containerName="registry" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.490494 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5740c4-4925-4b31-a055-45993f3811b8" containerName="registry" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.491013 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-n7m2q" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.492863 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4krvg" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.492921 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.492932 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.498224 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-n7m2q"] Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.508860 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-52xvz"] Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.509840 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-52xvz" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.511939 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lqc76" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.530277 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rcqpp"] Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.531266 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.551415 4751 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6rkx4" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.563502 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-52xvz"] Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.566685 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rcqpp"] Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.639639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw8tm\" (UniqueName: \"kubernetes.io/projected/6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3-kube-api-access-bw8tm\") pod \"cert-manager-webhook-5655c58dd6-rcqpp\" (UID: \"6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.639699 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2jl\" (UniqueName: \"kubernetes.io/projected/ac86f9c3-7a1c-430f-abdc-3002de03a7df-kube-api-access-zz2jl\") pod \"cert-manager-cainjector-7f985d654d-n7m2q\" (UID: \"ac86f9c3-7a1c-430f-abdc-3002de03a7df\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-n7m2q" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.639720 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcmm7\" (UniqueName: \"kubernetes.io/projected/fcc30abb-9ab6-4b0f-b27c-8772f6026dd7-kube-api-access-xcmm7\") pod \"cert-manager-5b446d88c5-52xvz\" (UID: \"fcc30abb-9ab6-4b0f-b27c-8772f6026dd7\") " pod="cert-manager/cert-manager-5b446d88c5-52xvz" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.740761 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw8tm\" (UniqueName: \"kubernetes.io/projected/6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3-kube-api-access-bw8tm\") pod \"cert-manager-webhook-5655c58dd6-rcqpp\" (UID: \"6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.740874 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2jl\" (UniqueName: \"kubernetes.io/projected/ac86f9c3-7a1c-430f-abdc-3002de03a7df-kube-api-access-zz2jl\") pod \"cert-manager-cainjector-7f985d654d-n7m2q\" (UID: \"ac86f9c3-7a1c-430f-abdc-3002de03a7df\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-n7m2q" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.740933 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcmm7\" (UniqueName: 
\"kubernetes.io/projected/fcc30abb-9ab6-4b0f-b27c-8772f6026dd7-kube-api-access-xcmm7\") pod \"cert-manager-5b446d88c5-52xvz\" (UID: \"fcc30abb-9ab6-4b0f-b27c-8772f6026dd7\") " pod="cert-manager/cert-manager-5b446d88c5-52xvz" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.768009 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz2jl\" (UniqueName: \"kubernetes.io/projected/ac86f9c3-7a1c-430f-abdc-3002de03a7df-kube-api-access-zz2jl\") pod \"cert-manager-cainjector-7f985d654d-n7m2q\" (UID: \"ac86f9c3-7a1c-430f-abdc-3002de03a7df\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-n7m2q" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.770658 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcmm7\" (UniqueName: \"kubernetes.io/projected/fcc30abb-9ab6-4b0f-b27c-8772f6026dd7-kube-api-access-xcmm7\") pod \"cert-manager-5b446d88c5-52xvz\" (UID: \"fcc30abb-9ab6-4b0f-b27c-8772f6026dd7\") " pod="cert-manager/cert-manager-5b446d88c5-52xvz" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.782083 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw8tm\" (UniqueName: \"kubernetes.io/projected/6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3-kube-api-access-bw8tm\") pod \"cert-manager-webhook-5655c58dd6-rcqpp\" (UID: \"6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.813835 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-n7m2q" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.822585 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-52xvz" Nov 23 04:04:27 crc kubenswrapper[4751]: I1123 04:04:27.846423 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" Nov 23 04:04:28 crc kubenswrapper[4751]: I1123 04:04:28.053400 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-n7m2q"] Nov 23 04:04:28 crc kubenswrapper[4751]: I1123 04:04:28.068759 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:04:28 crc kubenswrapper[4751]: I1123 04:04:28.090384 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-52xvz"] Nov 23 04:04:28 crc kubenswrapper[4751]: I1123 04:04:28.119216 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-rcqpp"] Nov 23 04:04:28 crc kubenswrapper[4751]: W1123 04:04:28.125080 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a01fd13_4ae3_4ea2_9fc1_e79f4b31e7a3.slice/crio-4fce7feb0543d56fba0bd64ac173065e401de7d0d611ee7550bb2515662052d8 WatchSource:0}: Error finding container 4fce7feb0543d56fba0bd64ac173065e401de7d0d611ee7550bb2515662052d8: Status 404 returned error can't find the container with id 4fce7feb0543d56fba0bd64ac173065e401de7d0d611ee7550bb2515662052d8 Nov 23 04:04:29 crc kubenswrapper[4751]: I1123 04:04:29.051827 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-52xvz" event={"ID":"fcc30abb-9ab6-4b0f-b27c-8772f6026dd7","Type":"ContainerStarted","Data":"9b9dc03bb830c56883e4b4120d1b5c71f22eb554c1ac3fd492bfb16e1e876476"} Nov 23 04:04:29 crc kubenswrapper[4751]: I1123 04:04:29.053727 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-n7m2q" event={"ID":"ac86f9c3-7a1c-430f-abdc-3002de03a7df","Type":"ContainerStarted","Data":"c7350676a8473761f8b883315710a3ba3aeb7c2e3c9161bb925f28ee83a9063a"} Nov 23 04:04:29 crc kubenswrapper[4751]: I1123 04:04:29.055015 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" event={"ID":"6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3","Type":"ContainerStarted","Data":"4fce7feb0543d56fba0bd64ac173065e401de7d0d611ee7550bb2515662052d8"} Nov 23 04:04:32 crc kubenswrapper[4751]: I1123 04:04:32.075144 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-n7m2q" event={"ID":"ac86f9c3-7a1c-430f-abdc-3002de03a7df","Type":"ContainerStarted","Data":"f4171260c2a0a9dbc8158a0306f64dd7368eaa6d33190b4d7ddc54928a5e175e"} Nov 23 04:04:32 crc kubenswrapper[4751]: I1123 04:04:32.081331 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" event={"ID":"6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3","Type":"ContainerStarted","Data":"1c6c6145b31f52dfd2650083dde1b055e98a0fb7cf163c8dd964b9b62f67042e"} Nov 23 04:04:32 crc kubenswrapper[4751]: I1123 04:04:32.081467 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" Nov 23 04:04:32 crc kubenswrapper[4751]: I1123 04:04:32.084699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-52xvz" event={"ID":"fcc30abb-9ab6-4b0f-b27c-8772f6026dd7","Type":"ContainerStarted","Data":"cb9bcdf1aff216fdbfa737bdba92e9d13f12979e1dffbbdc8de9dc520fb9b074"} Nov 23 04:04:32 crc kubenswrapper[4751]: I1123 04:04:32.115230 4751 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-n7m2q" podStartSLOduration=1.445572582 podStartE2EDuration="5.115209758s" podCreationTimestamp="2025-11-23 04:04:27 +0000 UTC" firstStartedPulling="2025-11-23 04:04:28.068549044 +0000 UTC m=+564.262220403" lastFinishedPulling="2025-11-23 04:04:31.73818621 +0000 UTC m=+567.931857579" observedRunningTime="2025-11-23 04:04:32.094012932 +0000 UTC m=+568.287684301" watchObservedRunningTime="2025-11-23 04:04:32.115209758 +0000 UTC m=+568.308881117" Nov 23 04:04:32 crc kubenswrapper[4751]: I1123 04:04:32.137597 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-52xvz" podStartSLOduration=1.474694167 podStartE2EDuration="5.137579186s" podCreationTimestamp="2025-11-23 04:04:27 +0000 UTC" firstStartedPulling="2025-11-23 04:04:28.100525993 +0000 UTC m=+564.294197352" lastFinishedPulling="2025-11-23 04:04:31.763410992 +0000 UTC m=+567.957082371" observedRunningTime="2025-11-23 04:04:32.116093972 +0000 UTC m=+568.309765331" watchObservedRunningTime="2025-11-23 04:04:32.137579186 +0000 UTC m=+568.331250545" Nov 23 04:04:32 crc kubenswrapper[4751]: I1123 04:04:32.138984 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" podStartSLOduration=1.519638958 podStartE2EDuration="5.138978813s" podCreationTimestamp="2025-11-23 04:04:27 +0000 UTC" firstStartedPulling="2025-11-23 04:04:28.127538873 +0000 UTC m=+564.321210232" lastFinishedPulling="2025-11-23 04:04:31.746878718 +0000 UTC m=+567.940550087" observedRunningTime="2025-11-23 04:04:32.135549062 +0000 UTC m=+568.329220431" watchObservedRunningTime="2025-11-23 04:04:32.138978813 +0000 UTC m=+568.332650172" Nov 23 04:04:37 crc kubenswrapper[4751]: I1123 04:04:37.849867 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-rcqpp" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.115340 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.115468 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.115538 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.116470 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df0bbfb2499535a0d9716c88e2a6d4d180a4c4bf5b034768201f2f7c48197b2e"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.116606 4751 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://df0bbfb2499535a0d9716c88e2a6d4d180a4c4bf5b034768201f2f7c48197b2e" gracePeriod=600 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.280513 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nfjcv"] Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.281474 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovn-controller" containerID="cri-o://dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3" gracePeriod=30 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.281632 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="northd" containerID="cri-o://34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77" gracePeriod=30 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.281661 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5" gracePeriod=30 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.281591 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="sbdb" containerID="cri-o://e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc" gracePeriod=30 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.281611 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kube-rbac-proxy-node" containerID="cri-o://559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc" gracePeriod=30 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.281937 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovn-acl-logging" containerID="cri-o://59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee" gracePeriod=30 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.281544 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="nbdb" containerID="cri-o://b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124" gracePeriod=30 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.324698 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" containerID="cri-o://4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106" gracePeriod=30 Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.635398 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/3.log" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.639011 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovn-acl-logging/0.log" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.640104 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovn-controller/0.log" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.640889 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715150 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7m8mf"] Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715334 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="northd" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715371 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="northd" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715383 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715389 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715398 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715404 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715411 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715416 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715424 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kubecfg-setup" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715430 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kubecfg-setup" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715439 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="nbdb" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715445 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="nbdb" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715451 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc 
kubenswrapper[4751]: I1123 04:04:38.715457 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715465 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kube-rbac-proxy-node" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715471 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kube-rbac-proxy-node" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715482 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovn-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715489 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovn-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715499 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovn-acl-logging" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715505 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovn-acl-logging" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715514 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="sbdb" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715519 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="sbdb" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715609 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715617 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715626 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="northd" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715633 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715640 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovn-acl-logging" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715646 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovn-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715652 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="sbdb" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715659 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="nbdb" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715666 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" 
containerName="kube-rbac-proxy-node" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715674 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715772 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715779 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: E1123 04:04:38.715787 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715793 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715883 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.715891 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" containerName="ovnkube-controller" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.717364 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-etc-openvswitch\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804543 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-node-log\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804563 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-var-lib-openvswitch\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804588 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-script-lib\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804607 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-log-socket\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804630 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-systemd-units\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804646 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-openvswitch\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804664 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-netns\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804682 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804710 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-kubelet\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804729 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a97283a1-e673-4d60-889d-f0d483d72c37-ovn-node-metrics-cert\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804749 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-netd\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804774 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-ovn-kubernetes\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rshhs\" (UniqueName: \"kubernetes.io/projected/a97283a1-e673-4d60-889d-f0d483d72c37-kube-api-access-rshhs\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804811 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-env-overrides\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804826 
4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-bin\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804846 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-slash\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804863 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-systemd\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804886 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-ovn\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.804904 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-config\") pod \"a97283a1-e673-4d60-889d-f0d483d72c37\" (UID: \"a97283a1-e673-4d60-889d-f0d483d72c37\") " Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805004 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-ovn\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805028 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-systemd\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805056 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-env-overrides\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805074 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-log-socket\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805089 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-kubelet\") pod 
\"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805104 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-systemd-units\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805121 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljt8s\" (UniqueName: \"kubernetes.io/projected/a0f48a99-2915-4ad7-84c9-acb2feb3967d-kube-api-access-ljt8s\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-node-log\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovn-node-metrics-cert\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805172 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-etc-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805192 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-cni-netd\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-run-ovn-kubernetes\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805231 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805249 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovnkube-config\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805265 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-cni-bin\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805282 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-run-netns\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805298 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovnkube-script-lib\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805329 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-slash\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805365 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-var-lib-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805452 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805475 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-node-log" (OuterVolumeSpecName: "node-log") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805491 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805869 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805910 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805938 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805943 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805980 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.805976 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.806007 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.806033 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.806037 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.806047 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-log-socket" (OuterVolumeSpecName: "log-socket") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.806057 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-slash" (OuterVolumeSpecName: "host-slash") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.806331 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.806404 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.810739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a97283a1-e673-4d60-889d-f0d483d72c37-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.810964 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97283a1-e673-4d60-889d-f0d483d72c37-kube-api-access-rshhs" (OuterVolumeSpecName: "kube-api-access-rshhs") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "kube-api-access-rshhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.817861 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a97283a1-e673-4d60-889d-f0d483d72c37" (UID: "a97283a1-e673-4d60-889d-f0d483d72c37"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.905867 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-run-ovn-kubernetes\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.905902 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.905924 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovnkube-config\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.905974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-run-ovn-kubernetes\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.905976 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906108 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-cni-bin\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906582 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovnkube-config\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906620 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-cni-bin\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906657 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-run-netns\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906676 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovnkube-script-lib\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-run-netns\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906693 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-slash\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906778 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906788 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-var-lib-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906816 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-var-lib-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906799 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-slash\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906830 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-ovn\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-systemd\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906888 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-ovn\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906915 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-env-overrides\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906933 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-log-socket\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906946 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-kubelet\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907001 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-run-systemd\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.906987 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-log-socket\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-systemd-units\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907030 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-systemd-units\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907050 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-kubelet\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907193 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljt8s\" (UniqueName: \"kubernetes.io/projected/a0f48a99-2915-4ad7-84c9-acb2feb3967d-kube-api-access-ljt8s\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907210 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-node-log\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907230 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovn-node-metrics-cert\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907246 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-etc-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907270 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-cni-netd\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907314 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-etc-openvswitch\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907321 4751 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907335 4751 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907418 4751 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907433 4751 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907441 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a97283a1-e673-4d60-889d-f0d483d72c37-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907449 4751 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907457 4751 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907466 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rshhs\" (UniqueName: \"kubernetes.io/projected/a97283a1-e673-4d60-889d-f0d483d72c37-kube-api-access-rshhs\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907474 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907481 4751 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907489 4751 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-host-slash\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907496 4751 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907507 4751 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907516 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907525 4751 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907533 4751 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-node-log\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907541 4751 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907549 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a97283a1-e673-4d60-889d-f0d483d72c37-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907557 4751 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-log-socket\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907566 4751 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a97283a1-e673-4d60-889d-f0d483d72c37-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907369 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-host-cni-netd\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.907316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a0f48a99-2915-4ad7-84c9-acb2feb3967d-node-log\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.908152 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovnkube-script-lib\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.908859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/a0f48a99-2915-4ad7-84c9-acb2feb3967d-env-overrides\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.913509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a0f48a99-2915-4ad7-84c9-acb2feb3967d-ovn-node-metrics-cert\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:38 crc kubenswrapper[4751]: I1123 04:04:38.929393 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljt8s\" (UniqueName: \"kubernetes.io/projected/a0f48a99-2915-4ad7-84c9-acb2feb3967d-kube-api-access-ljt8s\") pod \"ovnkube-node-7m8mf\" (UID: \"a0f48a99-2915-4ad7-84c9-acb2feb3967d\") " pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.033397 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:39 crc kubenswrapper[4751]: W1123 04:04:39.057616 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0f48a99_2915_4ad7_84c9_acb2feb3967d.slice/crio-803d8ca7cf25a94a46f8df023648a3a22f7d46830cf6850bbde285ee3f164e3a WatchSource:0}: Error finding container 803d8ca7cf25a94a46f8df023648a3a22f7d46830cf6850bbde285ee3f164e3a: Status 404 returned error can't find the container with id 803d8ca7cf25a94a46f8df023648a3a22f7d46830cf6850bbde285ee3f164e3a Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.176498 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"803d8ca7cf25a94a46f8df023648a3a22f7d46830cf6850bbde285ee3f164e3a"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.178692 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/2.log" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.179538 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/1.log" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.179663 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee318377-acb2-4f75-9414-02313f3824e0" containerID="4ed20621c5838b3e4184fbcb2fd997c3e04eff21ce058ecbe3eba314d96adeff" exitCode=2 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.179811 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dq7q" event={"ID":"ee318377-acb2-4f75-9414-02313f3824e0","Type":"ContainerDied","Data":"4ed20621c5838b3e4184fbcb2fd997c3e04eff21ce058ecbe3eba314d96adeff"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.179876 4751 scope.go:117] "RemoveContainer" containerID="226a6c165c2d69c8ebed7d355bf103e9cc7f51421dccfb9c4b9b68a90159ce5d" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.180809 4751 scope.go:117] "RemoveContainer" containerID="4ed20621c5838b3e4184fbcb2fd997c3e04eff21ce058ecbe3eba314d96adeff" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.181202 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4dq7q_openshift-multus(ee318377-acb2-4f75-9414-02313f3824e0)\"" pod="openshift-multus/multus-4dq7q" podUID="ee318377-acb2-4f75-9414-02313f3824e0" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.185437 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="df0bbfb2499535a0d9716c88e2a6d4d180a4c4bf5b034768201f2f7c48197b2e" exitCode=0 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.185492 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"df0bbfb2499535a0d9716c88e2a6d4d180a4c4bf5b034768201f2f7c48197b2e"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.185520 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"b4237ad3d8d19c6b3e554a7d8760278ed9d3d36fb9422fb2c3e4180d1664e464"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.189617 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovnkube-controller/3.log" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.193089 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovn-acl-logging/0.log" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.193756 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nfjcv_a97283a1-e673-4d60-889d-f0d483d72c37/ovn-controller/0.log" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194168 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106" exitCode=0 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194190 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc" exitCode=0 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194200 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124" exitCode=0 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194196 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" 
event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194294 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194209 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77" exitCode=0 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194327 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194294 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194375 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194326 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5" exitCode=0 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194416 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc" exitCode=0 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194436 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee" exitCode=143 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194449 4751 generic.go:334] "Generic (PLEG): container finished" podID="a97283a1-e673-4d60-889d-f0d483d72c37" containerID="dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3" exitCode=143 Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194485 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194497 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194508 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194518 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194527 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194536 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194545 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194555 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194564 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194574 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194589 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194604 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194616 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194625 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194634 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194642 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194654 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194664 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194673 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194682 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194691 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194717 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194728 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194739 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194748 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194758 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194770 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194781 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194791 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194799 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194808 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194821 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nfjcv" event={"ID":"a97283a1-e673-4d60-889d-f0d483d72c37","Type":"ContainerDied","Data":"5ade2dc3f463d1c6a3afbd8aed48b29630afab3ac1f2e4daa9e70a27c9263119"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194835 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194846 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194856 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194865 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194875 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194884 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194893 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194902 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194912 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.194921 4751 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.232094 4751 scope.go:117] "RemoveContainer" containerID="10f32d0d7f2e62c478ddb48470cd1194ce502e5eae263cb6ce53a7e62595816a" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.260709 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nfjcv"] Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 
04:04:39.260769 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nfjcv"] Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.318732 4751 scope.go:117] "RemoveContainer" containerID="4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.337386 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.354011 4751 scope.go:117] "RemoveContainer" containerID="e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.372396 4751 scope.go:117] "RemoveContainer" containerID="b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.383802 4751 scope.go:117] "RemoveContainer" containerID="34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.395320 4751 scope.go:117] "RemoveContainer" containerID="3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.410422 4751 scope.go:117] "RemoveContainer" containerID="559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.427847 4751 scope.go:117] "RemoveContainer" containerID="59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.446954 4751 scope.go:117] "RemoveContainer" containerID="dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.480445 4751 scope.go:117] "RemoveContainer" containerID="ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.512464 4751 scope.go:117] "RemoveContainer" containerID="4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.513026 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": container with ID starting with 4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106 not found: ID does not exist" containerID="4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.513080 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} err="failed to get container status \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": rpc error: code = NotFound desc = could not find container \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": container with ID starting with 4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.513113 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.513971 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": container with ID starting with d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae not found: ID does not exist" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.514003 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} err="failed to get container status \"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": rpc error: code = NotFound desc = could not find container \"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": container with ID starting with d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.514023 4751 scope.go:117] "RemoveContainer" containerID="e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.514416 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": container with ID starting with e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc not found: ID does not exist" containerID="e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.514489 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} err="failed to get container status \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": rpc error: code = NotFound desc = could not find container \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": container with ID starting with e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.514538 4751 scope.go:117] "RemoveContainer" containerID="b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.514905 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": container with ID starting with b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124 not found: ID does not exist" containerID="b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.514940 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} err="failed to get container status \"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": rpc error: code = NotFound desc = could not find container \"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": container with ID starting with b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.514960 4751 scope.go:117] "RemoveContainer" containerID="34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77" Nov 23 04:04:39 crc 
kubenswrapper[4751]: E1123 04:04:39.515635 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": container with ID starting with 34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77 not found: ID does not exist" containerID="34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.515665 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} err="failed to get container status \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": rpc error: code = NotFound desc = could not find container \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": container with ID starting with 34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.515687 4751 scope.go:117] "RemoveContainer" containerID="3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.516133 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": container with ID starting with 3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5 not found: ID does not exist" containerID="3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.516168 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} err="failed to get container status \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": rpc error: code = NotFound desc = could not find container \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": container with ID starting with 3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.516225 4751 scope.go:117] "RemoveContainer" containerID="559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.516582 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": container with ID starting with 559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc not found: ID does not exist" containerID="559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.516629 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} err="failed to get container status \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": rpc error: code = NotFound desc = could not find container \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": container with ID starting with 559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: 
I1123 04:04:39.516656 4751 scope.go:117] "RemoveContainer" containerID="59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.516999 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": container with ID starting with 59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee not found: ID does not exist" containerID="59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.517061 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} err="failed to get container status \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": rpc error: code = NotFound desc = could not find container \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": container with ID starting with 59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.517103 4751 scope.go:117] "RemoveContainer" containerID="dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.517630 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": container with ID starting with dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3 not found: ID does not exist" containerID="dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.517723 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} err="failed to get container status \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": rpc error: code = NotFound desc = could not find container \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": container with ID starting with dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.517779 4751 scope.go:117] "RemoveContainer" containerID="ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545" Nov 23 04:04:39 crc kubenswrapper[4751]: E1123 04:04:39.518483 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\": container with ID starting with ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545 not found: ID does not exist" containerID="ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.518513 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} err="failed to get container status \"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\": rpc error: code = NotFound desc = could not find container \"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\": container 
with ID starting with ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.518537 4751 scope.go:117] "RemoveContainer" containerID="4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.518856 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} err="failed to get container status \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": rpc error: code = NotFound desc = could not find container \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": container with ID starting with 4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.518915 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.519236 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} err="failed to get container status \"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": rpc error: code = NotFound desc = could not find container \"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": container with ID starting with d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.519259 4751 scope.go:117] "RemoveContainer" containerID="e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.519748 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} err="failed to get container status \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": rpc error: code = NotFound desc = could not find container \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": container with ID starting with e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.519801 4751 scope.go:117] "RemoveContainer" containerID="b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.520266 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} err="failed to get container status \"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": rpc error: code = NotFound desc = could not find container \"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": container with ID starting with b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.520290 4751 scope.go:117] "RemoveContainer" containerID="34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.520772 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} err="failed to get container status \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": rpc error: code = NotFound desc = could not find container \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": container with ID starting with 34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.520826 4751 scope.go:117] "RemoveContainer" containerID="3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.521256 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} err="failed to get container status \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": rpc error: code = NotFound desc = could not find container \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": container with ID starting with 3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.521282 4751 scope.go:117] "RemoveContainer" containerID="559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.521882 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} err="failed to get container status \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": rpc error: code = NotFound desc = could not find container \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": container with ID starting with 559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.521902 4751 scope.go:117] "RemoveContainer" containerID="59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.522377 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} err="failed to get container status \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": rpc error: code = NotFound desc = could not find container \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": container with ID starting with 59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.522435 4751 scope.go:117] "RemoveContainer" containerID="dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.522903 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} err="failed to get container status \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": rpc error: code = NotFound desc = could not find container \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": container with ID starting with dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3 not found: ID does not exist" Nov 
23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.522925 4751 scope.go:117] "RemoveContainer" containerID="ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.523418 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} err="failed to get container status \"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\": rpc error: code = NotFound desc = could not find container \"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\": container with ID starting with ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.523472 4751 scope.go:117] "RemoveContainer" containerID="4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.524074 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} err="failed to get container status \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": rpc error: code = NotFound desc = could not find container \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": container with ID starting with 4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.524124 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.524549 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} err="failed to get container status \"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": rpc error: code = NotFound desc = could not find container \"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": container with ID starting with d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.524626 4751 scope.go:117] "RemoveContainer" containerID="e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.525207 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} err="failed to get container status \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": rpc error: code = NotFound desc = could not find container \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": container with ID starting with e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.525229 4751 scope.go:117] "RemoveContainer" containerID="b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.525863 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} err="failed to get container status 
\"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": rpc error: code = NotFound desc = could not find container \"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": container with ID starting with b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.525882 4751 scope.go:117] "RemoveContainer" containerID="34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.526174 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} err="failed to get container status \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": rpc error: code = NotFound desc = could not find container \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": container with ID starting with 34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.526232 4751 scope.go:117] "RemoveContainer" containerID="3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.527050 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} err="failed to get container status \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": rpc error: code = NotFound desc = could not find container \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": container with ID starting with 3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.527072 4751 scope.go:117] "RemoveContainer" containerID="559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.527541 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} err="failed to get container status \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": rpc error: code = NotFound desc = could not find container \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": container with ID starting with 559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.527597 4751 scope.go:117] "RemoveContainer" containerID="59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.527938 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} err="failed to get container status \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": rpc error: code = NotFound desc = could not find container \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": container with ID starting with 59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.527964 4751 scope.go:117] "RemoveContainer" 
containerID="dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.528265 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} err="failed to get container status \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": rpc error: code = NotFound desc = could not find container \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": container with ID starting with dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.528318 4751 scope.go:117] "RemoveContainer" containerID="ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.528689 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545"} err="failed to get container status \"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\": rpc error: code = NotFound desc = could not find container \"ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545\": container with ID starting with ec8da85ada0fdb0278c8abd9ac32aee6387199b9529b868b7e982c7d28408545 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.528738 4751 scope.go:117] "RemoveContainer" containerID="4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.529011 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106"} err="failed to get container status \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": rpc error: code = NotFound desc = could not find container \"4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106\": container with ID starting with 4fa29f8620734386d614e583dfd237fac711483b5da80d2a57dc92c7339e9106 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.529038 4751 scope.go:117] "RemoveContainer" containerID="d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.529319 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae"} err="failed to get container status \"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": rpc error: code = NotFound desc = could not find container \"d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae\": container with ID starting with d1919dc3d43888375553be730234d6f47c5e0b6666c7cf24656b9ddfdf041fae not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.529430 4751 scope.go:117] "RemoveContainer" containerID="e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.529803 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc"} err="failed to get container status \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": rpc error: code = NotFound desc = could not find 
container \"e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc\": container with ID starting with e9ef9576384c09927d62d377fa49d96b2791a6264daabbaf400a3b3ba5c681cc not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.529870 4751 scope.go:117] "RemoveContainer" containerID="b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.530310 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124"} err="failed to get container status \"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": rpc error: code = NotFound desc = could not find container \"b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124\": container with ID starting with b4a65dc29e0e8ea7f5f16c098b20ff98548f612b1f7f6307ccca9c242a7a6124 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.530358 4751 scope.go:117] "RemoveContainer" containerID="34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.530917 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77"} err="failed to get container status \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": rpc error: code = NotFound desc = could not find container \"34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77\": container with ID starting with 34c0e7ab49cd5b47b2d215c7ab99c28906ea725859e5aeeac288e7489ea56d77 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.530948 4751 scope.go:117] "RemoveContainer" containerID="3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.531210 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5"} err="failed to get container status \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": rpc error: code = NotFound desc = could not find container \"3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5\": container with ID starting with 3e664c12f91912d4ce0f3a60c23c2cf370c4d71ab6acaf2ffc12f2e6d90245a5 not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.531235 4751 scope.go:117] "RemoveContainer" containerID="559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.531767 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc"} err="failed to get container status \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": rpc error: code = NotFound desc = could not find container \"559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc\": container with ID starting with 559300202bde53c399d4dd5c8c9c5862bea51da48bad894feca7e2530ebf58dc not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.531828 4751 scope.go:117] "RemoveContainer" containerID="59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.532661 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee"} err="failed to get container status \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": rpc error: code = NotFound desc = could not find container \"59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee\": container with ID starting with 59e99c24f370ca53e8ba106a10f3997c0c3c64399f11963555014606a6d8a7ee not found: ID does not exist" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.532694 4751 scope.go:117] "RemoveContainer" containerID="dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3" Nov 23 04:04:39 crc kubenswrapper[4751]: I1123 04:04:39.533882 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3"} err="failed to get container status \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": rpc error: code = NotFound desc = could not find container \"dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3\": container with ID starting with dc0c2039ef88ff659b1c6e2743269e7f8b1188f937b900a3a91d0d74c32da7c3 not found: ID does not exist" Nov 23 04:04:40 crc kubenswrapper[4751]: I1123 04:04:40.205275 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/2.log" Nov 23 04:04:40 crc kubenswrapper[4751]: I1123 04:04:40.214748 4751 generic.go:334] "Generic (PLEG): container finished" podID="a0f48a99-2915-4ad7-84c9-acb2feb3967d" containerID="3d3d114b2cd48b0d0542a73ec21deef94eb995b748a04b9aed2801e7b8d5a72f" exitCode=0 Nov 23 04:04:40 crc kubenswrapper[4751]: I1123 04:04:40.214791 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerDied","Data":"3d3d114b2cd48b0d0542a73ec21deef94eb995b748a04b9aed2801e7b8d5a72f"} Nov 23 04:04:40 crc kubenswrapper[4751]: I1123 04:04:40.658867 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a97283a1-e673-4d60-889d-f0d483d72c37" path="/var/lib/kubelet/pods/a97283a1-e673-4d60-889d-f0d483d72c37/volumes" Nov 23 04:04:41 crc kubenswrapper[4751]: I1123 04:04:41.225821 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"d7651194086695fd77fb2163e3410a5b7a4fded01cb0917799700026f9dcac71"} Nov 23 04:04:41 crc kubenswrapper[4751]: I1123 04:04:41.226137 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"6d59373c11885c8e8a30df7025acec451f2569e723b4e8ffc2662283ced19c53"} Nov 23 04:04:41 crc kubenswrapper[4751]: I1123 04:04:41.226150 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"c57b0f7ab5cae0ed407d288e9b6929c09f3a656b4d6f051c30d035967723c216"} Nov 23 04:04:41 crc kubenswrapper[4751]: I1123 04:04:41.226160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" 
event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"894b1efa95bec6b22516049816ee2eb15165f7b89e07826b77e2e12ec7c8102b"} Nov 23 04:04:41 crc kubenswrapper[4751]: I1123 04:04:41.226171 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"d38e7435b8923c1d017129aceb926afd77c1f746b9dbe9d45873aa1e22ff1a71"} Nov 23 04:04:41 crc kubenswrapper[4751]: I1123 04:04:41.226180 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"8af5189b14a1ac100bc6a89cb8d0e646cfafc94e7e22e8597d699215661369f7"} Nov 23 04:04:44 crc kubenswrapper[4751]: I1123 04:04:44.259426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"0118986b65cfd9a3267e943eb99422f473c0b25fb290a21fd0b2abba6a29bca2"} Nov 23 04:04:46 crc kubenswrapper[4751]: I1123 04:04:46.279023 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" event={"ID":"a0f48a99-2915-4ad7-84c9-acb2feb3967d","Type":"ContainerStarted","Data":"0e57b0570c0ddefb220a42a6b472c7f92bae155eaa935bb195c3c58f268316f9"} Nov 23 04:04:46 crc kubenswrapper[4751]: I1123 04:04:46.279589 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:46 crc kubenswrapper[4751]: I1123 04:04:46.279607 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:46 crc kubenswrapper[4751]: I1123 04:04:46.279618 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:46 crc kubenswrapper[4751]: I1123 04:04:46.308529 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:46 crc kubenswrapper[4751]: I1123 04:04:46.309913 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" Nov 23 04:04:46 crc kubenswrapper[4751]: I1123 04:04:46.320113 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf" podStartSLOduration=8.320094648 podStartE2EDuration="8.320094648s" podCreationTimestamp="2025-11-23 04:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:04:46.316282128 +0000 UTC m=+582.509953497" watchObservedRunningTime="2025-11-23 04:04:46.320094648 +0000 UTC m=+582.513766007" Nov 23 04:04:54 crc kubenswrapper[4751]: I1123 04:04:54.649632 4751 scope.go:117] "RemoveContainer" containerID="4ed20621c5838b3e4184fbcb2fd997c3e04eff21ce058ecbe3eba314d96adeff" Nov 23 04:04:54 crc kubenswrapper[4751]: E1123 04:04:54.650489 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-4dq7q_openshift-multus(ee318377-acb2-4f75-9414-02313f3824e0)\"" pod="openshift-multus/multus-4dq7q" podUID="ee318377-acb2-4f75-9414-02313f3824e0" Nov 23 04:05:06 crc kubenswrapper[4751]: 
I1123 04:05:06.643759 4751 scope.go:117] "RemoveContainer" containerID="4ed20621c5838b3e4184fbcb2fd997c3e04eff21ce058ecbe3eba314d96adeff"
Nov 23 04:05:07 crc kubenswrapper[4751]: I1123 04:05:07.428651 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4dq7q_ee318377-acb2-4f75-9414-02313f3824e0/kube-multus/2.log"
Nov 23 04:05:07 crc kubenswrapper[4751]: I1123 04:05:07.429050 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4dq7q" event={"ID":"ee318377-acb2-4f75-9414-02313f3824e0","Type":"ContainerStarted","Data":"218c97851849fe4b4ef47475d885278cd4d386bb6113a1e38afc24cc51464466"}
Nov 23 04:05:09 crc kubenswrapper[4751]: I1123 04:05:09.068291 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7m8mf"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.472953 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"]
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.474680 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.476729 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.491520 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"]
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.639105 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.639173 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p2m4\" (UniqueName: \"kubernetes.io/projected/99fc265a-c0ec-49a7-a273-192d173d25df-kube-api-access-6p2m4\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.639219 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.740814 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.741341 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p2m4\" (UniqueName: \"kubernetes.io/projected/99fc265a-c0ec-49a7-a273-192d173d25df-kube-api-access-6p2m4\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.741674 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.741863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.742287 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.775318 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p2m4\" (UniqueName: \"kubernetes.io/projected/99fc265a-c0ec-49a7-a273-192d173d25df-kube-api-access-6p2m4\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:17 crc kubenswrapper[4751]: I1123 04:05:17.799422 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:18 crc kubenswrapper[4751]: I1123 04:05:18.082419 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"]
Nov 23 04:05:18 crc kubenswrapper[4751]: I1123 04:05:18.496314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl" event={"ID":"99fc265a-c0ec-49a7-a273-192d173d25df","Type":"ContainerStarted","Data":"2255194fe8ff5a6676f966ecd129be561f0df438ee6c9adbc400effd220c5bfd"}
Nov 23 04:05:18 crc kubenswrapper[4751]: I1123 04:05:18.496919 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl" event={"ID":"99fc265a-c0ec-49a7-a273-192d173d25df","Type":"ContainerStarted","Data":"847f8a2e0f0ec2720b17435b9057b9575627ba31eaac6f8e3a88b3c6bd87a199"}
Nov 23 04:05:19 crc kubenswrapper[4751]: I1123 04:05:19.504267 4751 generic.go:334] "Generic (PLEG): container finished" podID="99fc265a-c0ec-49a7-a273-192d173d25df" containerID="2255194fe8ff5a6676f966ecd129be561f0df438ee6c9adbc400effd220c5bfd" exitCode=0
Nov 23 04:05:19 crc kubenswrapper[4751]: I1123 04:05:19.504326 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl" event={"ID":"99fc265a-c0ec-49a7-a273-192d173d25df","Type":"ContainerDied","Data":"2255194fe8ff5a6676f966ecd129be561f0df438ee6c9adbc400effd220c5bfd"}
Nov 23 04:05:22 crc kubenswrapper[4751]: I1123 04:05:22.524152 4751 generic.go:334] "Generic (PLEG): container finished" podID="99fc265a-c0ec-49a7-a273-192d173d25df" containerID="d64580b568ffd6c6e38e9619912631dd37bf7b65af18ec90a71e73239acf57ea" exitCode=0
Nov 23 04:05:22 crc kubenswrapper[4751]: I1123 04:05:22.524286 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl" event={"ID":"99fc265a-c0ec-49a7-a273-192d173d25df","Type":"ContainerDied","Data":"d64580b568ffd6c6e38e9619912631dd37bf7b65af18ec90a71e73239acf57ea"}
Nov 23 04:05:23 crc kubenswrapper[4751]: I1123 04:05:23.535401 4751 generic.go:334] "Generic (PLEG): container finished" podID="99fc265a-c0ec-49a7-a273-192d173d25df" containerID="0bffd2516aad33281b2657008847794b436cce3d3e7846036fda86ea4c918de3" exitCode=0
Nov 23 04:05:23 crc kubenswrapper[4751]: I1123 04:05:23.535459 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl" event={"ID":"99fc265a-c0ec-49a7-a273-192d173d25df","Type":"ContainerDied","Data":"0bffd2516aad33281b2657008847794b436cce3d3e7846036fda86ea4c918de3"}
Nov 23 04:05:24 crc kubenswrapper[4751]: I1123 04:05:24.925535 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.053709 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-bundle\") pod \"99fc265a-c0ec-49a7-a273-192d173d25df\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") "
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.053867 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-util\") pod \"99fc265a-c0ec-49a7-a273-192d173d25df\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") "
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.053910 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p2m4\" (UniqueName: \"kubernetes.io/projected/99fc265a-c0ec-49a7-a273-192d173d25df-kube-api-access-6p2m4\") pod \"99fc265a-c0ec-49a7-a273-192d173d25df\" (UID: \"99fc265a-c0ec-49a7-a273-192d173d25df\") "
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.054429 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-bundle" (OuterVolumeSpecName: "bundle") pod "99fc265a-c0ec-49a7-a273-192d173d25df" (UID: "99fc265a-c0ec-49a7-a273-192d173d25df"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.062991 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99fc265a-c0ec-49a7-a273-192d173d25df-kube-api-access-6p2m4" (OuterVolumeSpecName: "kube-api-access-6p2m4") pod "99fc265a-c0ec-49a7-a273-192d173d25df" (UID: "99fc265a-c0ec-49a7-a273-192d173d25df"). InnerVolumeSpecName "kube-api-access-6p2m4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.079406 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-util" (OuterVolumeSpecName: "util") pod "99fc265a-c0ec-49a7-a273-192d173d25df" (UID: "99fc265a-c0ec-49a7-a273-192d173d25df"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.155310 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-util\") on node \"crc\" DevicePath \"\""
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.155356 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p2m4\" (UniqueName: \"kubernetes.io/projected/99fc265a-c0ec-49a7-a273-192d173d25df-kube-api-access-6p2m4\") on node \"crc\" DevicePath \"\""
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.155369 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99fc265a-c0ec-49a7-a273-192d173d25df-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.551824 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl" event={"ID":"99fc265a-c0ec-49a7-a273-192d173d25df","Type":"ContainerDied","Data":"847f8a2e0f0ec2720b17435b9057b9575627ba31eaac6f8e3a88b3c6bd87a199"}
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.551892 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl"
Nov 23 04:05:25 crc kubenswrapper[4751]: I1123 04:05:25.551898 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="847f8a2e0f0ec2720b17435b9057b9575627ba31eaac6f8e3a88b3c6bd87a199"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.094863 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-dhvb2"]
Nov 23 04:05:29 crc kubenswrapper[4751]: E1123 04:05:29.095113 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99fc265a-c0ec-49a7-a273-192d173d25df" containerName="extract"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.095128 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="99fc265a-c0ec-49a7-a273-192d173d25df" containerName="extract"
Nov 23 04:05:29 crc kubenswrapper[4751]: E1123 04:05:29.095142 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99fc265a-c0ec-49a7-a273-192d173d25df" containerName="pull"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.095149 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="99fc265a-c0ec-49a7-a273-192d173d25df" containerName="pull"
Nov 23 04:05:29 crc kubenswrapper[4751]: E1123 04:05:29.095168 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99fc265a-c0ec-49a7-a273-192d173d25df" containerName="util"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.095176 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="99fc265a-c0ec-49a7-a273-192d173d25df" containerName="util"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.095280 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="99fc265a-c0ec-49a7-a273-192d173d25df" containerName="extract"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.095731 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-dhvb2"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.097973 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-k7pc9"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.101290 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.102239 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.113714 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-dhvb2"]
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.206138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6grzc\" (UniqueName: \"kubernetes.io/projected/9dff0d36-e2c4-4a48-a395-4ef9cae05540-kube-api-access-6grzc\") pod \"nmstate-operator-557fdffb88-dhvb2\" (UID: \"9dff0d36-e2c4-4a48-a395-4ef9cae05540\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-dhvb2"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.307536 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6grzc\" (UniqueName: \"kubernetes.io/projected/9dff0d36-e2c4-4a48-a395-4ef9cae05540-kube-api-access-6grzc\") pod \"nmstate-operator-557fdffb88-dhvb2\" (UID: \"9dff0d36-e2c4-4a48-a395-4ef9cae05540\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-dhvb2"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.341867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6grzc\" (UniqueName: \"kubernetes.io/projected/9dff0d36-e2c4-4a48-a395-4ef9cae05540-kube-api-access-6grzc\") pod \"nmstate-operator-557fdffb88-dhvb2\" (UID: \"9dff0d36-e2c4-4a48-a395-4ef9cae05540\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-dhvb2"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.423185 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-dhvb2"
Nov 23 04:05:29 crc kubenswrapper[4751]: I1123 04:05:29.681017 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-dhvb2"]
Nov 23 04:05:29 crc kubenswrapper[4751]: W1123 04:05:29.697013 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dff0d36_e2c4_4a48_a395_4ef9cae05540.slice/crio-d765f711a29e3358ca55312d0cd835059100991058402aea39d28ec19ace632b WatchSource:0}: Error finding container d765f711a29e3358ca55312d0cd835059100991058402aea39d28ec19ace632b: Status 404 returned error can't find the container with id d765f711a29e3358ca55312d0cd835059100991058402aea39d28ec19ace632b
Nov 23 04:05:30 crc kubenswrapper[4751]: I1123 04:05:30.589042 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-dhvb2" event={"ID":"9dff0d36-e2c4-4a48-a395-4ef9cae05540","Type":"ContainerStarted","Data":"d765f711a29e3358ca55312d0cd835059100991058402aea39d28ec19ace632b"}
Nov 23 04:05:32 crc kubenswrapper[4751]: I1123 04:05:32.603615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-dhvb2" event={"ID":"9dff0d36-e2c4-4a48-a395-4ef9cae05540","Type":"ContainerStarted","Data":"5cdd632400aa79c40105531526b5e0640420e76e4b882ff947d5ab40a8ae0e99"}
Nov 23 04:05:32 crc kubenswrapper[4751]: I1123 04:05:32.634970 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-dhvb2" podStartSLOduration=1.409645745 podStartE2EDuration="3.634934301s" podCreationTimestamp="2025-11-23 04:05:29 +0000 UTC" firstStartedPulling="2025-11-23 04:05:29.699429136 +0000 UTC m=+625.893100495" lastFinishedPulling="2025-11-23 04:05:31.924717692 +0000 UTC m=+628.118389051" observedRunningTime="2025-11-23 04:05:32.631946252 +0000 UTC m=+628.825617651" watchObservedRunningTime="2025-11-23 04:05:32.634934301 +0000 UTC m=+628.828605700"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.006024 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.007756 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.017506 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-z5zqd"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.018808 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.019962 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.022047 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.023629 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.046943 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.065743 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q9z6z"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.066619 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.123193 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghshm\" (UniqueName: \"kubernetes.io/projected/67521947-5803-47c5-95ee-ff1331b80d30-kube-api-access-ghshm\") pod \"nmstate-webhook-6b89b748d8-8tdcc\" (UID: \"67521947-5803-47c5-95ee-ff1331b80d30\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.123476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/67521947-5803-47c5-95ee-ff1331b80d30-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-8tdcc\" (UID: \"67521947-5803-47c5-95ee-ff1331b80d30\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.123576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-nmstate-lock\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.123650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-ovs-socket\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.123727 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvb2d\" (UniqueName: \"kubernetes.io/projected/b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5-kube-api-access-lvb2d\") pod \"nmstate-metrics-5dcf9c57c5-k5jpj\" (UID: \"b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.123790 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfml\" (UniqueName: \"kubernetes.io/projected/ac9fa491-4c47-4862-bb2f-96dd556da176-kube-api-access-jxfml\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.123861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-dbus-socket\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.137117 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.137930 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.140045 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.140233 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rnx8l"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.140386 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.175716 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224573 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b49364f-9a9b-4be9-b128-1a1b708073cc-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224631 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvb2d\" (UniqueName: \"kubernetes.io/projected/b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5-kube-api-access-lvb2d\") pod \"nmstate-metrics-5dcf9c57c5-k5jpj\" (UID: \"b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxfml\" (UniqueName: \"kubernetes.io/projected/ac9fa491-4c47-4862-bb2f-96dd556da176-kube-api-access-jxfml\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224678 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-dbus-socket\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224702 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4kdd\" (UniqueName: \"kubernetes.io/projected/6b49364f-9a9b-4be9-b128-1a1b708073cc-kube-api-access-f4kdd\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghshm\" (UniqueName: \"kubernetes.io/projected/67521947-5803-47c5-95ee-ff1331b80d30-kube-api-access-ghshm\") pod \"nmstate-webhook-6b89b748d8-8tdcc\" (UID: \"67521947-5803-47c5-95ee-ff1331b80d30\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224798 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/67521947-5803-47c5-95ee-ff1331b80d30-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-8tdcc\" (UID: \"67521947-5803-47c5-95ee-ff1331b80d30\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224825 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6b49364f-9a9b-4be9-b128-1a1b708073cc-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224853 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-nmstate-lock\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224880 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-ovs-socket\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.224952 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-ovs-socket\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.225244 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-nmstate-lock\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: E1123 04:05:38.225376 4751 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Nov 23 04:05:38 crc kubenswrapper[4751]: E1123 04:05:38.225452 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67521947-5803-47c5-95ee-ff1331b80d30-tls-key-pair podName:67521947-5803-47c5-95ee-ff1331b80d30 nodeName:}" failed. No retries permitted until 2025-11-23 04:05:38.725429308 +0000 UTC m=+634.919100717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/67521947-5803-47c5-95ee-ff1331b80d30-tls-key-pair") pod "nmstate-webhook-6b89b748d8-8tdcc" (UID: "67521947-5803-47c5-95ee-ff1331b80d30") : secret "openshift-nmstate-webhook" not found
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.228771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac9fa491-4c47-4862-bb2f-96dd556da176-dbus-socket\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.244943 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxfml\" (UniqueName: \"kubernetes.io/projected/ac9fa491-4c47-4862-bb2f-96dd556da176-kube-api-access-jxfml\") pod \"nmstate-handler-q9z6z\" (UID: \"ac9fa491-4c47-4862-bb2f-96dd556da176\") " pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.248689 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghshm\" (UniqueName: \"kubernetes.io/projected/67521947-5803-47c5-95ee-ff1331b80d30-kube-api-access-ghshm\") pod \"nmstate-webhook-6b89b748d8-8tdcc\" (UID: \"67521947-5803-47c5-95ee-ff1331b80d30\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.249484 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvb2d\" (UniqueName: \"kubernetes.io/projected/b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5-kube-api-access-lvb2d\") pod \"nmstate-metrics-5dcf9c57c5-k5jpj\" (UID: \"b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.325415 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj"
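[Annotation] The tls-key-pair failure above is a startup race, not a fault: the webhook pod was scheduled before its serving-cert secret existed, so secret.go reports "not found" and nestedpendingoperations blocks the retry for the logged durationBeforeRetry of 500ms. In this log the retry at 04:05:38.736 succeeds (see "MountVolume.SetUp succeeded for volume \"tls-key-pair\"" below). The 500ms initial delay is straight from the log line; the doubling and the roughly two-minute cap in this sketch mirror kubelet's exponential backoff as I understand it, and are assumptions rather than something this log shows:

    // Sketch of the retry pacing behind "No retries permitted until ...
    // (durationBeforeRetry 500ms)" when an operation keeps failing.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialDelay = 500 * time.Millisecond // matches the logged durationBeforeRetry
            maxDelay     = 2*time.Minute + 2*time.Second // assumed cap, not from this log
        )
        delay := initialDelay
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }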
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.325723 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6b49364f-9a9b-4be9-b128-1a1b708073cc-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.325775 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b49364f-9a9b-4be9-b128-1a1b708073cc-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.325805 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4kdd\" (UniqueName: \"kubernetes.io/projected/6b49364f-9a9b-4be9-b128-1a1b708073cc-kube-api-access-f4kdd\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.326784 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6b49364f-9a9b-4be9-b128-1a1b708073cc-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.328736 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-744679cbdb-pbvbm"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.329316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b49364f-9a9b-4be9-b128-1a1b708073cc-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.329678 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.361320 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-744679cbdb-pbvbm"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.368184 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4kdd\" (UniqueName: \"kubernetes.io/projected/6b49364f-9a9b-4be9-b128-1a1b708073cc-kube-api-access-f4kdd\") pod \"nmstate-console-plugin-5874bd7bc5-flnqj\" (UID: \"6b49364f-9a9b-4be9-b128-1a1b708073cc\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.384334 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.427222 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-service-ca\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.427296 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-config\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.427317 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95sn\" (UniqueName: \"kubernetes.io/projected/89f6a08d-20d7-4641-94bb-6cada97ff6fd-kube-api-access-c95sn\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.427342 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-oauth-config\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.427372 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-oauth-serving-cert\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.427390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-serving-cert\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.427412 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-trusted-ca-bundle\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.471639 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.531262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-service-ca\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.531435 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-config\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.531490 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c95sn\" (UniqueName: \"kubernetes.io/projected/89f6a08d-20d7-4641-94bb-6cada97ff6fd-kube-api-access-c95sn\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.531516 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-oauth-config\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.531576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-oauth-serving-cert\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.531595 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-serving-cert\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.531618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-trusted-ca-bundle\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.533909 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-service-ca\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.534329 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-oauth-serving-cert\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.535398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-config\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.535869 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj"]
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.537950 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-oauth-config\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.539743 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/89f6a08d-20d7-4641-94bb-6cada97ff6fd-console-serving-cert\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.540067 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f6a08d-20d7-4641-94bb-6cada97ff6fd-trusted-ca-bundle\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: W1123 04:05:38.544362 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb42c1e88_21f9_4d2b_86dc_b6ed330d3eb5.slice/crio-48e09febedf9703457ed244955d73d8ea419a9ed9a38d7d6e667ee87259a65ad WatchSource:0}: Error finding container 48e09febedf9703457ed244955d73d8ea419a9ed9a38d7d6e667ee87259a65ad: Status 404 returned error can't find the container with id 48e09febedf9703457ed244955d73d8ea419a9ed9a38d7d6e667ee87259a65ad
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.549599 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95sn\" (UniqueName: \"kubernetes.io/projected/89f6a08d-20d7-4641-94bb-6cada97ff6fd-kube-api-access-c95sn\") pod \"console-744679cbdb-pbvbm\" (UID: \"89f6a08d-20d7-4641-94bb-6cada97ff6fd\") " pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.642457 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj" event={"ID":"b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5","Type":"ContainerStarted","Data":"48e09febedf9703457ed244955d73d8ea419a9ed9a38d7d6e667ee87259a65ad"}
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.649970 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q9z6z" event={"ID":"ac9fa491-4c47-4862-bb2f-96dd556da176","Type":"ContainerStarted","Data":"3158679532c1720b00b0371abce787f48cd62c02a4cd831fa5c5c93533198714"}
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.663878 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj"]
Nov 23 04:05:38 crc kubenswrapper[4751]: W1123 04:05:38.673700 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b49364f_9a9b_4be9_b128_1a1b708073cc.slice/crio-961e9666f1d48bc48bfeccd4a157d538b857dec8a041288928ca95ca1e61faa7 WatchSource:0}: Error finding container 961e9666f1d48bc48bfeccd4a157d538b857dec8a041288928ca95ca1e61faa7: Status 404 returned error can't find the container with id 961e9666f1d48bc48bfeccd4a157d538b857dec8a041288928ca95ca1e61faa7
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.706040 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.736062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/67521947-5803-47c5-95ee-ff1331b80d30-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-8tdcc\" (UID: \"67521947-5803-47c5-95ee-ff1331b80d30\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.743322 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/67521947-5803-47c5-95ee-ff1331b80d30-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-8tdcc\" (UID: \"67521947-5803-47c5-95ee-ff1331b80d30\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.943158 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:38 crc kubenswrapper[4751]: I1123 04:05:38.946889 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-744679cbdb-pbvbm"]
Nov 23 04:05:38 crc kubenswrapper[4751]: W1123 04:05:38.958166 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89f6a08d_20d7_4641_94bb_6cada97ff6fd.slice/crio-4d76dee4e75ce4c7cfa95d83af9479504fb5c8519189b7f754f727e1191f2a75 WatchSource:0}: Error finding container 4d76dee4e75ce4c7cfa95d83af9479504fb5c8519189b7f754f727e1191f2a75: Status 404 returned error can't find the container with id 4d76dee4e75ce4c7cfa95d83af9479504fb5c8519189b7f754f727e1191f2a75
Nov 23 04:05:39 crc kubenswrapper[4751]: I1123 04:05:39.139603 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"]
Nov 23 04:05:39 crc kubenswrapper[4751]: W1123 04:05:39.154492 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67521947_5803_47c5_95ee_ff1331b80d30.slice/crio-a12f9dc1b21fdfb85198cf9369384e57facc89e8a28158f97e6edf3ee3d33f59 WatchSource:0}: Error finding container a12f9dc1b21fdfb85198cf9369384e57facc89e8a28158f97e6edf3ee3d33f59: Status 404 returned error can't find the container with id a12f9dc1b21fdfb85198cf9369384e57facc89e8a28158f97e6edf3ee3d33f59
Nov 23 04:05:39 crc kubenswrapper[4751]: I1123 04:05:39.654987 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj" event={"ID":"6b49364f-9a9b-4be9-b128-1a1b708073cc","Type":"ContainerStarted","Data":"961e9666f1d48bc48bfeccd4a157d538b857dec8a041288928ca95ca1e61faa7"}
Nov 23 04:05:39 crc kubenswrapper[4751]: I1123 04:05:39.656954 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc" event={"ID":"67521947-5803-47c5-95ee-ff1331b80d30","Type":"ContainerStarted","Data":"a12f9dc1b21fdfb85198cf9369384e57facc89e8a28158f97e6edf3ee3d33f59"}
Nov 23 04:05:39 crc kubenswrapper[4751]: I1123 04:05:39.659282 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-744679cbdb-pbvbm" event={"ID":"89f6a08d-20d7-4641-94bb-6cada97ff6fd","Type":"ContainerStarted","Data":"c3b711b3f4eab97ebb33d637d0867d0a1c56dcc61b49d46f0f492f968fc878ca"}
Nov 23 04:05:39 crc kubenswrapper[4751]: I1123 04:05:39.659318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-744679cbdb-pbvbm" event={"ID":"89f6a08d-20d7-4641-94bb-6cada97ff6fd","Type":"ContainerStarted","Data":"4d76dee4e75ce4c7cfa95d83af9479504fb5c8519189b7f754f727e1191f2a75"}
Nov 23 04:05:39 crc kubenswrapper[4751]: I1123 04:05:39.675330 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-744679cbdb-pbvbm" podStartSLOduration=1.6753091759999998 podStartE2EDuration="1.675309176s" podCreationTimestamp="2025-11-23 04:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:05:39.67508332 +0000 UTC m=+635.868754699" watchObservedRunningTime="2025-11-23 04:05:39.675309176 +0000 UTC m=+635.868980545"
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.679154 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q9z6z" event={"ID":"ac9fa491-4c47-4862-bb2f-96dd556da176","Type":"ContainerStarted","Data":"09d1ac4d6525783de70952888dc027eb612212b2394ea7c51149b3bf848cd139"}
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.679883 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.682592 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj" event={"ID":"6b49364f-9a9b-4be9-b128-1a1b708073cc","Type":"ContainerStarted","Data":"7c96cfa6b528317f5a03f463263ecdb1801fae8680d2bf2c436e76c339ccda0f"}
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.685545 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc" event={"ID":"67521947-5803-47c5-95ee-ff1331b80d30","Type":"ContainerStarted","Data":"00017b0eac1bc363aedf0aa7d838ec585df81283f5f00b06d3cb77260ab86242"}
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.686021 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.688241 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj" event={"ID":"b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5","Type":"ContainerStarted","Data":"43649fd9b93ca675a29b7320e6ff69bab9ac50976111df59701a994acb65d790"}
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.698989 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q9z6z" podStartSLOduration=1.361834884 podStartE2EDuration="4.69895577s" podCreationTimestamp="2025-11-23 04:05:38 +0000 UTC" firstStartedPulling="2025-11-23 04:05:38.405576358 +0000 UTC m=+634.599247717" lastFinishedPulling="2025-11-23 04:05:41.742697204 +0000 UTC m=+637.936368603" observedRunningTime="2025-11-23 04:05:42.695006937 +0000 UTC m=+638.888678326" watchObservedRunningTime="2025-11-23 04:05:42.69895577 +0000 UTC m=+638.892627179"
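[Annotation] The "Observed pod startup duration" entries decompose cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+ offsets). Checking the nmstate-handler-q9z6z entry just above: pull window = 637.936368603 - 634.599247717 = 3.337120886 s, and 4.69895577 - 3.337120886 = 1.361834884, exactly the logged podStartSLOduration. As a worked check:

    // Reproduce podStartSLOduration for nmstate-handler-q9z6z from the logged values.
    package main

    import "fmt"

    func main() {
        const (
            e2eSeconds  = 4.69895577    // podStartE2EDuration: watchObservedRunningTime - podCreationTimestamp
            pullStartM  = 634.599247717 // firstStartedPulling, monotonic m=+ offset (seconds)
            pullFinishM = 637.936368603 // lastFinishedPulling, monotonic m=+ offset (seconds)
        )
        slo := e2eSeconds - (pullFinishM - pullStartM)
        fmt.Printf("podStartSLOduration = %.9f s\n", slo) // 1.361834884, matching the log
    }

The console entry above shows the degenerate case: no image pull happened (both pull timestamps are the zero time 0001-01-01), so podStartSLOduration equals podStartE2EDuration.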
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.717830 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc" podStartSLOduration=3.101664068 podStartE2EDuration="5.717808914s" podCreationTimestamp="2025-11-23 04:05:37 +0000 UTC" firstStartedPulling="2025-11-23 04:05:39.157851229 +0000 UTC m=+635.351522588" lastFinishedPulling="2025-11-23 04:05:41.773996065 +0000 UTC m=+637.967667434" observedRunningTime="2025-11-23 04:05:42.713725147 +0000 UTC m=+638.907396546" watchObservedRunningTime="2025-11-23 04:05:42.717808914 +0000 UTC m=+638.911480293"
Nov 23 04:05:42 crc kubenswrapper[4751]: I1123 04:05:42.738819 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-flnqj" podStartSLOduration=1.688188105 podStartE2EDuration="4.738751483s" podCreationTimestamp="2025-11-23 04:05:38 +0000 UTC" firstStartedPulling="2025-11-23 04:05:38.677128683 +0000 UTC m=+634.870800052" lastFinishedPulling="2025-11-23 04:05:41.727692031 +0000 UTC m=+637.921363430" observedRunningTime="2025-11-23 04:05:42.734330527 +0000 UTC m=+638.928001896" watchObservedRunningTime="2025-11-23 04:05:42.738751483 +0000 UTC m=+638.932422922"
Nov 23 04:05:45 crc kubenswrapper[4751]: I1123 04:05:45.713462 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj" event={"ID":"b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5","Type":"ContainerStarted","Data":"d727c2b7aff53a7696ab197f39fbec48336706e9449837fa09033cfa3390b1cc"}
Nov 23 04:05:45 crc kubenswrapper[4751]: I1123 04:05:45.746103 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-k5jpj" podStartSLOduration=2.692488296 podStartE2EDuration="8.746071778s" podCreationTimestamp="2025-11-23 04:05:37 +0000 UTC" firstStartedPulling="2025-11-23 04:05:38.547223589 +0000 UTC m=+634.740894958" lastFinishedPulling="2025-11-23 04:05:44.600807041 +0000 UTC m=+640.794478440" observedRunningTime="2025-11-23 04:05:45.737276098 +0000 UTC m=+641.930947497" watchObservedRunningTime="2025-11-23 04:05:45.746071778 +0000 UTC m=+641.939743177"
Nov 23 04:05:48 crc kubenswrapper[4751]: I1123 04:05:48.429990 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q9z6z"
Nov 23 04:05:48 crc kubenswrapper[4751]: I1123 04:05:48.706388 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:48 crc kubenswrapper[4751]: I1123 04:05:48.706445 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:48 crc kubenswrapper[4751]: I1123 04:05:48.714082 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:48 crc kubenswrapper[4751]: I1123 04:05:48.746072 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-744679cbdb-pbvbm"
Nov 23 04:05:48 crc kubenswrapper[4751]: I1123 04:05:48.803048 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bbrhw"]
Nov 23 04:05:58 crc kubenswrapper[4751]: I1123 04:05:58.953018 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-8tdcc"
Nov 23 04:06:13 crc kubenswrapper[4751]: I1123 04:06:13.862224 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bbrhw" podUID="4baedc4d-15a1-49d0-b82f-a57fce419702" containerName="console" containerID="cri-o://2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b" gracePeriod=15
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.330714 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bbrhw_4baedc4d-15a1-49d0-b82f-a57fce419702/console/0.log"
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.331210 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bbrhw"
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.477632 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-serving-cert\") pod \"4baedc4d-15a1-49d0-b82f-a57fce419702\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") "
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.477708 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-oauth-serving-cert\") pod \"4baedc4d-15a1-49d0-b82f-a57fce419702\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") "
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.477781 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-service-ca\") pod \"4baedc4d-15a1-49d0-b82f-a57fce419702\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") "
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.477857 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-console-config\") pod \"4baedc4d-15a1-49d0-b82f-a57fce419702\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") "
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.477910 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6bdn\" (UniqueName: \"kubernetes.io/projected/4baedc4d-15a1-49d0-b82f-a57fce419702-kube-api-access-b6bdn\") pod \"4baedc4d-15a1-49d0-b82f-a57fce419702\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") "
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.477946 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-trusted-ca-bundle\") pod \"4baedc4d-15a1-49d0-b82f-a57fce419702\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") "
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.477989 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-oauth-config\") pod \"4baedc4d-15a1-49d0-b82f-a57fce419702\" (UID: \"4baedc4d-15a1-49d0-b82f-a57fce419702\") "
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.479371 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-service-ca" (OuterVolumeSpecName: "service-ca") pod "4baedc4d-15a1-49d0-b82f-a57fce419702" (UID: "4baedc4d-15a1-49d0-b82f-a57fce419702"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.479850 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4baedc4d-15a1-49d0-b82f-a57fce419702" (UID: "4baedc4d-15a1-49d0-b82f-a57fce419702"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.480767 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-console-config" (OuterVolumeSpecName: "console-config") pod "4baedc4d-15a1-49d0-b82f-a57fce419702" (UID: "4baedc4d-15a1-49d0-b82f-a57fce419702"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.480879 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4baedc4d-15a1-49d0-b82f-a57fce419702" (UID: "4baedc4d-15a1-49d0-b82f-a57fce419702"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.481078 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv"]
Nov 23 04:06:14 crc kubenswrapper[4751]: E1123 04:06:14.481326 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4baedc4d-15a1-49d0-b82f-a57fce419702" containerName="console"
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.481372 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4baedc4d-15a1-49d0-b82f-a57fce419702" containerName="console"
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.481516 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4baedc4d-15a1-49d0-b82f-a57fce419702" containerName="console"
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.482552 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv"
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.484963 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.497571 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv"]
Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.501501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4baedc4d-15a1-49d0-b82f-a57fce419702-kube-api-access-b6bdn" (OuterVolumeSpecName: "kube-api-access-b6bdn") pod "4baedc4d-15a1-49d0-b82f-a57fce419702" (UID: "4baedc4d-15a1-49d0-b82f-a57fce419702"). InnerVolumeSpecName "kube-api-access-b6bdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
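[Annotation] The console-f9d7485db-bbrhw teardown above shows the termination ordering: the replacement console pod goes ready, the old pod is deleted via "SyncLoop DELETE", its container is killed with gracePeriod=15, and only once the container is gone do the UnmountVolume/TearDown entries run, followed by the "Volume detached" records below. A minimal sketch of the grace-period pattern itself, applied to a plain local process rather than CRI (the helper name and the sleep target are mine):

    // Deliver SIGTERM, wait up to the grace period, then SIGKILL the remainder.
    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        _ = cmd.Process.Signal(syscall.SIGTERM) // polite stop, analogous to the runtime's graceful stop
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace window
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace period expired: SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        _ = cmd.Start()
        _ = stopWithGrace(cmd, 15*time.Second) // 15s, matching gracePeriod=15 above
    }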
InnerVolumeSpecName "kube-api-access-b6bdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.501548 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4baedc4d-15a1-49d0-b82f-a57fce419702" (UID: "4baedc4d-15a1-49d0-b82f-a57fce419702"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.501921 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4baedc4d-15a1-49d0-b82f-a57fce419702" (UID: "4baedc4d-15a1-49d0-b82f-a57fce419702"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.579241 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xpm8\" (UniqueName: \"kubernetes.io/projected/f23641b9-2eca-418a-90e9-13dd56c87cb8-kube-api-access-2xpm8\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.579318 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.579738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.579938 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.579975 4751 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.579997 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6bdn\" (UniqueName: \"kubernetes.io/projected/4baedc4d-15a1-49d0-b82f-a57fce419702-kube-api-access-b6bdn\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.580016 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 
04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.580035 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.580054 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4baedc4d-15a1-49d0-b82f-a57fce419702-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.580072 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4baedc4d-15a1-49d0-b82f-a57fce419702-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.682043 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.682153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xpm8\" (UniqueName: \"kubernetes.io/projected/f23641b9-2eca-418a-90e9-13dd56c87cb8-kube-api-access-2xpm8\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.682203 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.682995 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.683850 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.711696 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xpm8\" (UniqueName: \"kubernetes.io/projected/f23641b9-2eca-418a-90e9-13dd56c87cb8-kube-api-access-2xpm8\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.850918 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.932682 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bbrhw_4baedc4d-15a1-49d0-b82f-a57fce419702/console/0.log" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.932760 4751 generic.go:334] "Generic (PLEG): container finished" podID="4baedc4d-15a1-49d0-b82f-a57fce419702" containerID="2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b" exitCode=2 Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.932804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bbrhw" event={"ID":"4baedc4d-15a1-49d0-b82f-a57fce419702","Type":"ContainerDied","Data":"2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b"} Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.932843 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bbrhw" event={"ID":"4baedc4d-15a1-49d0-b82f-a57fce419702","Type":"ContainerDied","Data":"d726835aeb97691c5993b1153d2436ee4ce21073426df601aac1a9e4895d6c7f"} Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.932873 4751 scope.go:117] "RemoveContainer" containerID="2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.932908 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bbrhw" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.955433 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bbrhw"] Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.962493 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bbrhw"] Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.970003 4751 scope.go:117] "RemoveContainer" containerID="2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b" Nov 23 04:06:14 crc kubenswrapper[4751]: E1123 04:06:14.970648 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b\": container with ID starting with 2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b not found: ID does not exist" containerID="2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b" Nov 23 04:06:14 crc kubenswrapper[4751]: I1123 04:06:14.970803 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b"} err="failed to get container status \"2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b\": rpc error: code = NotFound desc = could not find container \"2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b\": container with ID starting with 2a143ca6b3c01f8ed7a7f13086cb2870d02ae19708c0642bab311d7856de709b not found: ID does not exist" Nov 23 04:06:15 crc kubenswrapper[4751]: I1123 04:06:15.153576 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv"] Nov 23 04:06:15 crc kubenswrapper[4751]: I1123 04:06:15.944208 4751 generic.go:334] "Generic (PLEG): container finished" podID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerID="aed0343f682706718f43be5af8df6eb60a81403d0083a36d6d4a85da87b2e5df" exitCode=0 Nov 23 04:06:15 crc kubenswrapper[4751]: I1123 04:06:15.945368 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" event={"ID":"f23641b9-2eca-418a-90e9-13dd56c87cb8","Type":"ContainerDied","Data":"aed0343f682706718f43be5af8df6eb60a81403d0083a36d6d4a85da87b2e5df"} Nov 23 04:06:15 crc kubenswrapper[4751]: I1123 04:06:15.945551 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" event={"ID":"f23641b9-2eca-418a-90e9-13dd56c87cb8","Type":"ContainerStarted","Data":"fd5283316b3d6e17cf6362ed33e3122928f53a4e4eb9c63a7ff17a546befad66"} Nov 23 04:06:16 crc kubenswrapper[4751]: I1123 04:06:16.657657 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4baedc4d-15a1-49d0-b82f-a57fce419702" path="/var/lib/kubelet/pods/4baedc4d-15a1-49d0-b82f-a57fce419702/volumes" Nov 23 04:06:17 crc kubenswrapper[4751]: I1123 04:06:17.968925 4751 generic.go:334] "Generic (PLEG): container finished" podID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerID="2d77ae2c65f9cdabd3f86646330502b89154d4df8f7e6f9fe338088f9931fd81" exitCode=0 Nov 23 04:06:17 crc kubenswrapper[4751]: I1123 04:06:17.969230 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" event={"ID":"f23641b9-2eca-418a-90e9-13dd56c87cb8","Type":"ContainerDied","Data":"2d77ae2c65f9cdabd3f86646330502b89154d4df8f7e6f9fe338088f9931fd81"} Nov 23 04:06:18 crc kubenswrapper[4751]: I1123 04:06:18.981449 4751 generic.go:334] "Generic (PLEG): container finished" podID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerID="eb1e72a0c73bb69b517726a26fa64e1d1e4ca17bcd1990f17370bdb6839d4f03" exitCode=0 Nov 23 04:06:18 crc kubenswrapper[4751]: I1123 04:06:18.981497 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" event={"ID":"f23641b9-2eca-418a-90e9-13dd56c87cb8","Type":"ContainerDied","Data":"eb1e72a0c73bb69b517726a26fa64e1d1e4ca17bcd1990f17370bdb6839d4f03"} Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.340523 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.472172 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-bundle\") pod \"f23641b9-2eca-418a-90e9-13dd56c87cb8\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.472501 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xpm8\" (UniqueName: \"kubernetes.io/projected/f23641b9-2eca-418a-90e9-13dd56c87cb8-kube-api-access-2xpm8\") pod \"f23641b9-2eca-418a-90e9-13dd56c87cb8\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.472594 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-util\") pod \"f23641b9-2eca-418a-90e9-13dd56c87cb8\" (UID: \"f23641b9-2eca-418a-90e9-13dd56c87cb8\") " Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.474089 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-bundle" (OuterVolumeSpecName: "bundle") pod "f23641b9-2eca-418a-90e9-13dd56c87cb8" (UID: "f23641b9-2eca-418a-90e9-13dd56c87cb8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.482237 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f23641b9-2eca-418a-90e9-13dd56c87cb8-kube-api-access-2xpm8" (OuterVolumeSpecName: "kube-api-access-2xpm8") pod "f23641b9-2eca-418a-90e9-13dd56c87cb8" (UID: "f23641b9-2eca-418a-90e9-13dd56c87cb8"). InnerVolumeSpecName "kube-api-access-2xpm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.503704 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-util" (OuterVolumeSpecName: "util") pod "f23641b9-2eca-418a-90e9-13dd56c87cb8" (UID: "f23641b9-2eca-418a-90e9-13dd56c87cb8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.574557 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-util\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.574615 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f23641b9-2eca-418a-90e9-13dd56c87cb8-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:20 crc kubenswrapper[4751]: I1123 04:06:20.574636 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xpm8\" (UniqueName: \"kubernetes.io/projected/f23641b9-2eca-418a-90e9-13dd56c87cb8-kube-api-access-2xpm8\") on node \"crc\" DevicePath \"\"" Nov 23 04:06:21 crc kubenswrapper[4751]: I1123 04:06:21.001614 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" event={"ID":"f23641b9-2eca-418a-90e9-13dd56c87cb8","Type":"ContainerDied","Data":"fd5283316b3d6e17cf6362ed33e3122928f53a4e4eb9c63a7ff17a546befad66"} Nov 23 04:06:21 crc kubenswrapper[4751]: I1123 04:06:21.001712 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5283316b3d6e17cf6362ed33e3122928f53a4e4eb9c63a7ff17a546befad66" Nov 23 04:06:21 crc kubenswrapper[4751]: I1123 04:06:21.001638 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.207684 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9"] Nov 23 04:06:31 crc kubenswrapper[4751]: E1123 04:06:31.208469 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerName="extract" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.208486 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerName="extract" Nov 23 04:06:31 crc kubenswrapper[4751]: E1123 04:06:31.208507 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerName="util" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.208516 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerName="util" Nov 23 04:06:31 crc kubenswrapper[4751]: E1123 04:06:31.208534 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerName="pull" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.208544 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerName="pull" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.208679 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f23641b9-2eca-418a-90e9-13dd56c87cb8" containerName="extract" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.209210 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.210784 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.211607 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.211777 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.212538 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qhqvg" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.215553 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.235447 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9"] Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.238291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mr4t\" (UniqueName: \"kubernetes.io/projected/ed440de8-4a60-48c8-85e5-a0431415aa1e-kube-api-access-4mr4t\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.238361 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ed440de8-4a60-48c8-85e5-a0431415aa1e-apiservice-cert\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.238443 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed440de8-4a60-48c8-85e5-a0431415aa1e-webhook-cert\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.339184 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mr4t\" (UniqueName: \"kubernetes.io/projected/ed440de8-4a60-48c8-85e5-a0431415aa1e-kube-api-access-4mr4t\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.339234 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ed440de8-4a60-48c8-85e5-a0431415aa1e-apiservice-cert\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.339283 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed440de8-4a60-48c8-85e5-a0431415aa1e-webhook-cert\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.344724 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ed440de8-4a60-48c8-85e5-a0431415aa1e-webhook-cert\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.351987 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ed440de8-4a60-48c8-85e5-a0431415aa1e-apiservice-cert\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.376903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mr4t\" (UniqueName: \"kubernetes.io/projected/ed440de8-4a60-48c8-85e5-a0431415aa1e-kube-api-access-4mr4t\") pod \"metallb-operator-controller-manager-859f4d786d-lx7n9\" (UID: \"ed440de8-4a60-48c8-85e5-a0431415aa1e\") " pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.523580 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7577964656-7fb5v"] Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.524586 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.526526 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.526545 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.526554 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.526646 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nbm7f" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.545818 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7577964656-7fb5v"] Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.641839 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-webhook-cert\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.642099 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-apiservice-cert\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.642140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvrm\" (UniqueName: \"kubernetes.io/projected/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-kube-api-access-txvrm\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.743110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-apiservice-cert\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.743223 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvrm\" (UniqueName: \"kubernetes.io/projected/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-kube-api-access-txvrm\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.743366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-webhook-cert\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.751835 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-webhook-cert\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " 
pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.755913 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-apiservice-cert\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.766042 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9"] Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.773824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvrm\" (UniqueName: \"kubernetes.io/projected/5f06ddd2-0977-4bb4-954a-8bff2da8d49a-kube-api-access-txvrm\") pod \"metallb-operator-webhook-server-7577964656-7fb5v\" (UID: \"5f06ddd2-0977-4bb4-954a-8bff2da8d49a\") " pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:31 crc kubenswrapper[4751]: W1123 04:06:31.782106 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded440de8_4a60_48c8_85e5_a0431415aa1e.slice/crio-a7af04c4d59855bf445b5dcef47876c5fcb892224d6993a75d98b5519f14a27f WatchSource:0}: Error finding container a7af04c4d59855bf445b5dcef47876c5fcb892224d6993a75d98b5519f14a27f: Status 404 returned error can't find the container with id a7af04c4d59855bf445b5dcef47876c5fcb892224d6993a75d98b5519f14a27f Nov 23 04:06:31 crc kubenswrapper[4751]: I1123 04:06:31.884359 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:32 crc kubenswrapper[4751]: I1123 04:06:32.075328 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" event={"ID":"ed440de8-4a60-48c8-85e5-a0431415aa1e","Type":"ContainerStarted","Data":"a7af04c4d59855bf445b5dcef47876c5fcb892224d6993a75d98b5519f14a27f"} Nov 23 04:06:32 crc kubenswrapper[4751]: I1123 04:06:32.081488 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7577964656-7fb5v"] Nov 23 04:06:33 crc kubenswrapper[4751]: I1123 04:06:33.083186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" event={"ID":"5f06ddd2-0977-4bb4-954a-8bff2da8d49a","Type":"ContainerStarted","Data":"b9a74009bd94c00c5b4b186a19ffa6a1e198cc190987baa7077c123ee0753fe3"} Nov 23 04:06:37 crc kubenswrapper[4751]: I1123 04:06:37.113567 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" event={"ID":"5f06ddd2-0977-4bb4-954a-8bff2da8d49a","Type":"ContainerStarted","Data":"670bd1fa9791a7c2a0697063210b8e6c21bb7a2b8de5f5d89270c97d89726c9d"} Nov 23 04:06:37 crc kubenswrapper[4751]: I1123 04:06:37.113879 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:37 crc kubenswrapper[4751]: I1123 04:06:37.117636 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" event={"ID":"ed440de8-4a60-48c8-85e5-a0431415aa1e","Type":"ContainerStarted","Data":"3651cd41427e9f30f8673de44147dd267b3f49f0c646aa73aff26d9fce274ece"} Nov 23 04:06:37 crc kubenswrapper[4751]: I1123 04:06:37.118224 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:06:37 crc kubenswrapper[4751]: I1123 04:06:37.137356 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" podStartSLOduration=2.22999933 podStartE2EDuration="6.137325744s" podCreationTimestamp="2025-11-23 04:06:31 +0000 UTC" firstStartedPulling="2025-11-23 04:06:32.094059664 +0000 UTC m=+688.287731023" lastFinishedPulling="2025-11-23 04:06:36.001386038 +0000 UTC m=+692.195057437" observedRunningTime="2025-11-23 04:06:37.136195405 +0000 UTC m=+693.329866774" watchObservedRunningTime="2025-11-23 04:06:37.137325744 +0000 UTC m=+693.330997113" Nov 23 04:06:38 crc kubenswrapper[4751]: I1123 04:06:38.114894 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:06:38 crc kubenswrapper[4751]: I1123 04:06:38.115448 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:06:51 crc kubenswrapper[4751]: I1123 04:06:51.896393 4751 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7577964656-7fb5v" Nov 23 04:06:51 crc kubenswrapper[4751]: I1123 04:06:51.928797 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" podStartSLOduration=16.773690805 podStartE2EDuration="20.928774883s" podCreationTimestamp="2025-11-23 04:06:31 +0000 UTC" firstStartedPulling="2025-11-23 04:06:31.784846292 +0000 UTC m=+687.978517651" lastFinishedPulling="2025-11-23 04:06:35.93993037 +0000 UTC m=+692.133601729" observedRunningTime="2025-11-23 04:06:37.159511915 +0000 UTC m=+693.353183284" watchObservedRunningTime="2025-11-23 04:06:51.928774883 +0000 UTC m=+708.122446252" Nov 23 04:07:08 crc kubenswrapper[4751]: I1123 04:07:08.114929 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:07:08 crc kubenswrapper[4751]: I1123 04:07:08.115790 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:07:11 crc kubenswrapper[4751]: I1123 04:07:11.531584 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-859f4d786d-lx7n9" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.385479 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-64sx8"] Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.386142 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.388242 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dgb9d"] Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.388421 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8wfdm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.389067 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.390163 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.394271 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.394322 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.435734 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-64sx8"] Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.487936 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-ps8wm"] Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.488819 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.490708 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.490932 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.490997 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kqlzg" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.492290 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.506302 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-5kht5"] Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.507316 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.512782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-sockets\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.512845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7sg\" (UniqueName: \"kubernetes.io/projected/52ebaa08-f93a-422b-8c95-728f7ad4a20c-kube-api-access-7w7sg\") pod \"frr-k8s-webhook-server-6998585d5-64sx8\" (UID: \"52ebaa08-f93a-422b-8c95-728f7ad4a20c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.512878 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ebaa08-f93a-422b-8c95-728f7ad4a20c-cert\") pod \"frr-k8s-webhook-server-6998585d5-64sx8\" (UID: \"52ebaa08-f93a-422b-8c95-728f7ad4a20c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.512905 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjss2\" (UniqueName: \"kubernetes.io/projected/f2261f62-a80e-45a3-8ab8-b72f43f53d73-kube-api-access-rjss2\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.512927 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics-certs\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.512956 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-startup\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.512987 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-reloader\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.513017 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.513043 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-conf\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " 
pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.513418 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.519337 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5kht5"] Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614126 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w7sg\" (UniqueName: \"kubernetes.io/projected/52ebaa08-f93a-422b-8c95-728f7ad4a20c-kube-api-access-7w7sg\") pod \"frr-k8s-webhook-server-6998585d5-64sx8\" (UID: \"52ebaa08-f93a-422b-8c95-728f7ad4a20c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614178 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ebaa08-f93a-422b-8c95-728f7ad4a20c-cert\") pod \"frr-k8s-webhook-server-6998585d5-64sx8\" (UID: \"52ebaa08-f93a-422b-8c95-728f7ad4a20c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgkq\" (UniqueName: \"kubernetes.io/projected/0a9bcd23-2927-40fc-be78-28a85fd0c43c-kube-api-access-ljgkq\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjss2\" (UniqueName: \"kubernetes.io/projected/f2261f62-a80e-45a3-8ab8-b72f43f53d73-kube-api-access-rjss2\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614244 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics-certs\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614264 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-metrics-certs\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614283 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9bcd23-2927-40fc-be78-28a85fd0c43c-metrics-certs\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614304 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-startup\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 
04:07:12.614331 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-reloader\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: E1123 04:07:12.614385 4751 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 23 04:07:12 crc kubenswrapper[4751]: E1123 04:07:12.614461 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics-certs podName:f2261f62-a80e-45a3-8ab8-b72f43f53d73 nodeName:}" failed. No retries permitted until 2025-11-23 04:07:13.114438916 +0000 UTC m=+729.308110395 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics-certs") pod "frr-k8s-dgb9d" (UID: "f2261f62-a80e-45a3-8ab8-b72f43f53d73") : secret "frr-k8s-certs-secret" not found Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a9bcd23-2927-40fc-be78-28a85fd0c43c-cert\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614550 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-conf\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614641 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-memberlist\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614750 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj4pl\" (UniqueName: \"kubernetes.io/projected/24d322b0-264c-482c-9daa-9ee340079d1f-kube-api-access-kj4pl\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614754 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-reloader\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics\") 
pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.614992 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-conf\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.615132 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-sockets\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.615161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/24d322b0-264c-482c-9daa-9ee340079d1f-metallb-excludel2\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.615372 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-startup\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.615883 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f2261f62-a80e-45a3-8ab8-b72f43f53d73-frr-sockets\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.620949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ebaa08-f93a-422b-8c95-728f7ad4a20c-cert\") pod \"frr-k8s-webhook-server-6998585d5-64sx8\" (UID: \"52ebaa08-f93a-422b-8c95-728f7ad4a20c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.635990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjss2\" (UniqueName: \"kubernetes.io/projected/f2261f62-a80e-45a3-8ab8-b72f43f53d73-kube-api-access-rjss2\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.643994 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w7sg\" (UniqueName: \"kubernetes.io/projected/52ebaa08-f93a-422b-8c95-728f7ad4a20c-kube-api-access-7w7sg\") pod \"frr-k8s-webhook-server-6998585d5-64sx8\" (UID: \"52ebaa08-f93a-422b-8c95-728f7ad4a20c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.704598 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.716481 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgkq\" (UniqueName: \"kubernetes.io/projected/0a9bcd23-2927-40fc-be78-28a85fd0c43c-kube-api-access-ljgkq\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.716540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-metrics-certs\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.716561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9bcd23-2927-40fc-be78-28a85fd0c43c-metrics-certs\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.716589 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a9bcd23-2927-40fc-be78-28a85fd0c43c-cert\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.716617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-memberlist\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.716640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj4pl\" (UniqueName: \"kubernetes.io/projected/24d322b0-264c-482c-9daa-9ee340079d1f-kube-api-access-kj4pl\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.716662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/24d322b0-264c-482c-9daa-9ee340079d1f-metallb-excludel2\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: E1123 04:07:12.718566 4751 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 23 04:07:12 crc kubenswrapper[4751]: E1123 04:07:12.718642 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-memberlist podName:24d322b0-264c-482c-9daa-9ee340079d1f nodeName:}" failed. No retries permitted until 2025-11-23 04:07:13.218624629 +0000 UTC m=+729.412295978 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-memberlist") pod "speaker-ps8wm" (UID: "24d322b0-264c-482c-9daa-9ee340079d1f") : secret "metallb-memberlist" not found Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.719919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/24d322b0-264c-482c-9daa-9ee340079d1f-metallb-excludel2\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.724212 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-metrics-certs\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.724305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9bcd23-2927-40fc-be78-28a85fd0c43c-metrics-certs\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.738291 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.739401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj4pl\" (UniqueName: \"kubernetes.io/projected/24d322b0-264c-482c-9daa-9ee340079d1f-kube-api-access-kj4pl\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.741610 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a9bcd23-2927-40fc-be78-28a85fd0c43c-cert\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.746315 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgkq\" (UniqueName: \"kubernetes.io/projected/0a9bcd23-2927-40fc-be78-28a85fd0c43c-kube-api-access-ljgkq\") pod \"controller-6c7b4b5f48-5kht5\" (UID: \"0a9bcd23-2927-40fc-be78-28a85fd0c43c\") " pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.820469 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:12 crc kubenswrapper[4751]: I1123 04:07:12.997592 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5kht5"] Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.109971 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-64sx8"] Nov 23 04:07:13 crc kubenswrapper[4751]: W1123 04:07:13.114940 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ebaa08_f93a_422b_8c95_728f7ad4a20c.slice/crio-01c09e0f9341ef4791701a1ff782cabc68caae6e394b24bdc471d04ba102f8ad WatchSource:0}: Error finding container 01c09e0f9341ef4791701a1ff782cabc68caae6e394b24bdc471d04ba102f8ad: Status 404 returned error can't find the container with id 01c09e0f9341ef4791701a1ff782cabc68caae6e394b24bdc471d04ba102f8ad Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.123093 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics-certs\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.128004 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2261f62-a80e-45a3-8ab8-b72f43f53d73-metrics-certs\") pod \"frr-k8s-dgb9d\" (UID: \"f2261f62-a80e-45a3-8ab8-b72f43f53d73\") " pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.225170 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-memberlist\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.229027 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/24d322b0-264c-482c-9daa-9ee340079d1f-memberlist\") pod \"speaker-ps8wm\" (UID: \"24d322b0-264c-482c-9daa-9ee340079d1f\") " pod="metallb-system/speaker-ps8wm" Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.311429 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.374939 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" event={"ID":"52ebaa08-f93a-422b-8c95-728f7ad4a20c","Type":"ContainerStarted","Data":"01c09e0f9341ef4791701a1ff782cabc68caae6e394b24bdc471d04ba102f8ad"} Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.376594 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5kht5" event={"ID":"0a9bcd23-2927-40fc-be78-28a85fd0c43c","Type":"ContainerStarted","Data":"74d5545a34c9fcc3a7a20a0dd6973f2993ef9cb51dcbfa361b8e67533df877fa"} Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.376626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5kht5" event={"ID":"0a9bcd23-2927-40fc-be78-28a85fd0c43c","Type":"ContainerStarted","Data":"f50c4969750fd38324389fd0fd08d0c0832124e16ea4bb40b6955eb8053506c1"} Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.376641 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5kht5" event={"ID":"0a9bcd23-2927-40fc-be78-28a85fd0c43c","Type":"ContainerStarted","Data":"81305a514d08360c817a2b274e8b1f763e68d4ff77cfeca5c89aa528309da0fd"} Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.376792 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.398274 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-5kht5" podStartSLOduration=1.3982548590000001 podStartE2EDuration="1.398254859s" podCreationTimestamp="2025-11-23 04:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:07:13.396577155 +0000 UTC m=+729.590248524" watchObservedRunningTime="2025-11-23 04:07:13.398254859 +0000 UTC m=+729.591926218" Nov 23 04:07:13 crc kubenswrapper[4751]: I1123 04:07:13.401421 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-ps8wm" Nov 23 04:07:13 crc kubenswrapper[4751]: W1123 04:07:13.419625 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d322b0_264c_482c_9daa_9ee340079d1f.slice/crio-3636807426fe34d243974c29c691ee2958fad3f131f2571a894a70f62803eda4 WatchSource:0}: Error finding container 3636807426fe34d243974c29c691ee2958fad3f131f2571a894a70f62803eda4: Status 404 returned error can't find the container with id 3636807426fe34d243974c29c691ee2958fad3f131f2571a894a70f62803eda4 Nov 23 04:07:14 crc kubenswrapper[4751]: I1123 04:07:14.383881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerStarted","Data":"d57014134f66d10ffb40a5a23e48d8f8e5ce7afa4d1e2b46f885c6fc28b8ec8e"} Nov 23 04:07:14 crc kubenswrapper[4751]: I1123 04:07:14.386959 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ps8wm" event={"ID":"24d322b0-264c-482c-9daa-9ee340079d1f","Type":"ContainerStarted","Data":"99cd0ed3e48123a93866ff3e2951e8cb570c52cc3695c9bbb724d7ff3feb3ac8"} Nov 23 04:07:14 crc kubenswrapper[4751]: I1123 04:07:14.387041 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ps8wm" event={"ID":"24d322b0-264c-482c-9daa-9ee340079d1f","Type":"ContainerStarted","Data":"08a76cf71013bc74dfe36f323e5a1ddddbac9c5efbb46a480991cbb8e54c8c95"} Nov 23 04:07:14 crc kubenswrapper[4751]: I1123 04:07:14.387065 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-ps8wm" event={"ID":"24d322b0-264c-482c-9daa-9ee340079d1f","Type":"ContainerStarted","Data":"3636807426fe34d243974c29c691ee2958fad3f131f2571a894a70f62803eda4"} Nov 23 04:07:14 crc kubenswrapper[4751]: I1123 04:07:14.387330 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-ps8wm" Nov 23 04:07:14 crc kubenswrapper[4751]: I1123 04:07:14.403192 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-ps8wm" podStartSLOduration=2.403167039 podStartE2EDuration="2.403167039s" podCreationTimestamp="2025-11-23 04:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:07:14.402506682 +0000 UTC m=+730.596178071" watchObservedRunningTime="2025-11-23 04:07:14.403167039 +0000 UTC m=+730.596838408" Nov 23 04:07:21 crc kubenswrapper[4751]: I1123 04:07:21.434635 4751 generic.go:334] "Generic (PLEG): container finished" podID="f2261f62-a80e-45a3-8ab8-b72f43f53d73" containerID="9d2be8978ea1eb7f1ba8dc90cc9160eda3fea764bb4c7e0548057821a2a6abdc" exitCode=0 Nov 23 04:07:21 crc kubenswrapper[4751]: I1123 04:07:21.434847 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerDied","Data":"9d2be8978ea1eb7f1ba8dc90cc9160eda3fea764bb4c7e0548057821a2a6abdc"} Nov 23 04:07:21 crc kubenswrapper[4751]: I1123 04:07:21.440503 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" event={"ID":"52ebaa08-f93a-422b-8c95-728f7ad4a20c","Type":"ContainerStarted","Data":"5a2dd126e8b8f572f4941785245aee21ec154bd309d6d16d3e9e049208b80e67"} Nov 23 04:07:21 crc kubenswrapper[4751]: I1123 04:07:21.441170 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:21 crc kubenswrapper[4751]: I1123 04:07:21.510034 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" podStartSLOduration=2.40014087 podStartE2EDuration="9.510005172s" podCreationTimestamp="2025-11-23 04:07:12 +0000 UTC" firstStartedPulling="2025-11-23 04:07:13.118109688 +0000 UTC m=+729.311781057" lastFinishedPulling="2025-11-23 04:07:20.22797399 +0000 UTC m=+736.421645359" observedRunningTime="2025-11-23 04:07:21.499102867 +0000 UTC m=+737.692774266" watchObservedRunningTime="2025-11-23 04:07:21.510005172 +0000 UTC m=+737.703676571" Nov 23 04:07:22 crc kubenswrapper[4751]: I1123 04:07:22.448659 4751 generic.go:334] "Generic (PLEG): container finished" podID="f2261f62-a80e-45a3-8ab8-b72f43f53d73" containerID="c0d08f9072e81a3ba792c9569d861638c09123f7cebde03089f1c5fda1c0a909" exitCode=0 Nov 23 04:07:22 crc kubenswrapper[4751]: I1123 04:07:22.448825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerDied","Data":"c0d08f9072e81a3ba792c9569d861638c09123f7cebde03089f1c5fda1c0a909"} Nov 23 04:07:23 crc kubenswrapper[4751]: I1123 04:07:23.405864 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-ps8wm" Nov 23 04:07:23 crc kubenswrapper[4751]: I1123 04:07:23.458228 4751 generic.go:334] "Generic (PLEG): container finished" podID="f2261f62-a80e-45a3-8ab8-b72f43f53d73" containerID="244a86a538101bca6ab33868680acaca55190c918106c3520ffd3b92668529c1" exitCode=0 Nov 23 04:07:23 crc kubenswrapper[4751]: I1123 04:07:23.458279 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerDied","Data":"244a86a538101bca6ab33868680acaca55190c918106c3520ffd3b92668529c1"} Nov 23 04:07:24 crc kubenswrapper[4751]: I1123 04:07:24.480426 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerStarted","Data":"43ea7378c6d5a7c9e457946615849ab4d686c0ea94159c67c50ee477e109f468"} Nov 23 04:07:24 crc kubenswrapper[4751]: I1123 04:07:24.480796 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerStarted","Data":"dd9303cd110935e901074e23ba0178a0b7256cc210c6801b57c008c6f2ec79dc"} Nov 23 04:07:24 crc kubenswrapper[4751]: I1123 04:07:24.480813 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerStarted","Data":"8b739258eb7f72843f8a40f38edbf28de3868db480e223218ade93198f780704"} Nov 23 04:07:24 crc kubenswrapper[4751]: I1123 04:07:24.480827 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerStarted","Data":"652ce4bfda3c22c7b74e035972af7c1a26e2e8cdc403a76bbaadae927fdcc61a"} Nov 23 04:07:24 crc kubenswrapper[4751]: I1123 04:07:24.480840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerStarted","Data":"466086ea51bbbc9c4cc211f5acf76643f8126f74242005c199e38ca90c3d1e68"} Nov 23 04:07:25 crc 
kubenswrapper[4751]: I1123 04:07:25.494336 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dgb9d" event={"ID":"f2261f62-a80e-45a3-8ab8-b72f43f53d73","Type":"ContainerStarted","Data":"d025358ebe911003952249f1a731676d5b29640e4a6eb7fb6bb82a298005ebac"} Nov 23 04:07:25 crc kubenswrapper[4751]: I1123 04:07:25.494667 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:25 crc kubenswrapper[4751]: I1123 04:07:25.529991 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dgb9d" podStartSLOduration=6.704159981 podStartE2EDuration="13.529958329s" podCreationTimestamp="2025-11-23 04:07:12 +0000 UTC" firstStartedPulling="2025-11-23 04:07:13.4300408 +0000 UTC m=+729.623712169" lastFinishedPulling="2025-11-23 04:07:20.255839118 +0000 UTC m=+736.449510517" observedRunningTime="2025-11-23 04:07:25.526448178 +0000 UTC m=+741.720119567" watchObservedRunningTime="2025-11-23 04:07:25.529958329 +0000 UTC m=+741.723629738" Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.513100 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5z7k2"] Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.514503 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5z7k2" Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.516773 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ncbfs" Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.516874 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.522798 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.552801 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5z7k2"] Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.637892 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrvdm\" (UniqueName: \"kubernetes.io/projected/a3ede8cb-cc18-422f-bb46-ce8e07a39538-kube-api-access-xrvdm\") pod \"openstack-operator-index-5z7k2\" (UID: \"a3ede8cb-cc18-422f-bb46-ce8e07a39538\") " pod="openstack-operators/openstack-operator-index-5z7k2" Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.738779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrvdm\" (UniqueName: \"kubernetes.io/projected/a3ede8cb-cc18-422f-bb46-ce8e07a39538-kube-api-access-xrvdm\") pod \"openstack-operator-index-5z7k2\" (UID: \"a3ede8cb-cc18-422f-bb46-ce8e07a39538\") " pod="openstack-operators/openstack-operator-index-5z7k2" Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.755510 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrvdm\" (UniqueName: \"kubernetes.io/projected/a3ede8cb-cc18-422f-bb46-ce8e07a39538-kube-api-access-xrvdm\") pod \"openstack-operator-index-5z7k2\" (UID: \"a3ede8cb-cc18-422f-bb46-ce8e07a39538\") " pod="openstack-operators/openstack-operator-index-5z7k2" Nov 23 04:07:26 crc kubenswrapper[4751]: I1123 04:07:26.847607 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5z7k2" Nov 23 04:07:27 crc kubenswrapper[4751]: I1123 04:07:27.250309 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5z7k2"] Nov 23 04:07:27 crc kubenswrapper[4751]: I1123 04:07:27.513269 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5z7k2" event={"ID":"a3ede8cb-cc18-422f-bb46-ce8e07a39538","Type":"ContainerStarted","Data":"551a467ac58585e52f7d31cbbf67ff0b1adb56ee77c152930261658705eaa8e0"} Nov 23 04:07:28 crc kubenswrapper[4751]: I1123 04:07:28.313032 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:28 crc kubenswrapper[4751]: I1123 04:07:28.375752 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:29 crc kubenswrapper[4751]: I1123 04:07:29.872389 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5z7k2"] Nov 23 04:07:30 crc kubenswrapper[4751]: I1123 04:07:30.480737 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zm8mr"] Nov 23 04:07:30 crc kubenswrapper[4751]: I1123 04:07:30.482160 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:30 crc kubenswrapper[4751]: I1123 04:07:30.495240 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zm8mr"] Nov 23 04:07:30 crc kubenswrapper[4751]: I1123 04:07:30.659000 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xblb\" (UniqueName: \"kubernetes.io/projected/906bbca3-2aaf-47a8-ba3e-ee004ca911d6-kube-api-access-4xblb\") pod \"openstack-operator-index-zm8mr\" (UID: \"906bbca3-2aaf-47a8-ba3e-ee004ca911d6\") " pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:30 crc kubenswrapper[4751]: I1123 04:07:30.760876 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xblb\" (UniqueName: \"kubernetes.io/projected/906bbca3-2aaf-47a8-ba3e-ee004ca911d6-kube-api-access-4xblb\") pod \"openstack-operator-index-zm8mr\" (UID: \"906bbca3-2aaf-47a8-ba3e-ee004ca911d6\") " pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:30 crc kubenswrapper[4751]: I1123 04:07:30.788682 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xblb\" (UniqueName: \"kubernetes.io/projected/906bbca3-2aaf-47a8-ba3e-ee004ca911d6-kube-api-access-4xblb\") pod \"openstack-operator-index-zm8mr\" (UID: \"906bbca3-2aaf-47a8-ba3e-ee004ca911d6\") " pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:30 crc kubenswrapper[4751]: I1123 04:07:30.807788 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:31 crc kubenswrapper[4751]: I1123 04:07:31.074677 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zm8mr"] Nov 23 04:07:31 crc kubenswrapper[4751]: I1123 04:07:31.546321 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5z7k2" event={"ID":"a3ede8cb-cc18-422f-bb46-ce8e07a39538","Type":"ContainerStarted","Data":"53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8"} Nov 23 04:07:31 crc kubenswrapper[4751]: I1123 04:07:31.546557 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-5z7k2" podUID="a3ede8cb-cc18-422f-bb46-ce8e07a39538" containerName="registry-server" containerID="cri-o://53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8" gracePeriod=2 Nov 23 04:07:31 crc kubenswrapper[4751]: I1123 04:07:31.548519 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zm8mr" event={"ID":"906bbca3-2aaf-47a8-ba3e-ee004ca911d6","Type":"ContainerStarted","Data":"8681b93e4acae5f33f2f831a8402ae10422e9153c725992c4f831a32098108eb"} Nov 23 04:07:31 crc kubenswrapper[4751]: I1123 04:07:31.548572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zm8mr" event={"ID":"906bbca3-2aaf-47a8-ba3e-ee004ca911d6","Type":"ContainerStarted","Data":"7032158d935b784185ec1a1457f63fdb8d4fe21fdc319af436096531b75c5afa"} Nov 23 04:07:31 crc kubenswrapper[4751]: I1123 04:07:31.561549 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5z7k2" podStartSLOduration=1.7851744219999999 podStartE2EDuration="5.561530844s" podCreationTimestamp="2025-11-23 04:07:26 +0000 UTC" firstStartedPulling="2025-11-23 04:07:27.258448437 +0000 UTC m=+743.452119796" lastFinishedPulling="2025-11-23 04:07:31.034804859 +0000 UTC m=+747.228476218" observedRunningTime="2025-11-23 04:07:31.560192849 +0000 UTC m=+747.753864228" watchObservedRunningTime="2025-11-23 04:07:31.561530844 +0000 UTC m=+747.755202213" Nov 23 04:07:31 crc kubenswrapper[4751]: I1123 04:07:31.587395 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zm8mr" podStartSLOduration=1.5374626249999999 podStartE2EDuration="1.587375659s" podCreationTimestamp="2025-11-23 04:07:30 +0000 UTC" firstStartedPulling="2025-11-23 04:07:31.085310719 +0000 UTC m=+747.278982068" lastFinishedPulling="2025-11-23 04:07:31.135223733 +0000 UTC m=+747.328895102" observedRunningTime="2025-11-23 04:07:31.583382095 +0000 UTC m=+747.777053514" watchObservedRunningTime="2025-11-23 04:07:31.587375659 +0000 UTC m=+747.781047028" Nov 23 04:07:31 crc kubenswrapper[4751]: I1123 04:07:31.925359 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5z7k2" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.093321 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrvdm\" (UniqueName: \"kubernetes.io/projected/a3ede8cb-cc18-422f-bb46-ce8e07a39538-kube-api-access-xrvdm\") pod \"a3ede8cb-cc18-422f-bb46-ce8e07a39538\" (UID: \"a3ede8cb-cc18-422f-bb46-ce8e07a39538\") " Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.099183 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ede8cb-cc18-422f-bb46-ce8e07a39538-kube-api-access-xrvdm" (OuterVolumeSpecName: "kube-api-access-xrvdm") pod "a3ede8cb-cc18-422f-bb46-ce8e07a39538" (UID: "a3ede8cb-cc18-422f-bb46-ce8e07a39538"). InnerVolumeSpecName "kube-api-access-xrvdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.194934 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrvdm\" (UniqueName: \"kubernetes.io/projected/a3ede8cb-cc18-422f-bb46-ce8e07a39538-kube-api-access-xrvdm\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.559570 4751 generic.go:334] "Generic (PLEG): container finished" podID="a3ede8cb-cc18-422f-bb46-ce8e07a39538" containerID="53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8" exitCode=0 Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.559683 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5z7k2" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.559753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5z7k2" event={"ID":"a3ede8cb-cc18-422f-bb46-ce8e07a39538","Type":"ContainerDied","Data":"53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8"} Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.559846 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5z7k2" event={"ID":"a3ede8cb-cc18-422f-bb46-ce8e07a39538","Type":"ContainerDied","Data":"551a467ac58585e52f7d31cbbf67ff0b1adb56ee77c152930261658705eaa8e0"} Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.559883 4751 scope.go:117] "RemoveContainer" containerID="53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.583234 4751 scope.go:117] "RemoveContainer" containerID="53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8" Nov 23 04:07:32 crc kubenswrapper[4751]: E1123 04:07:32.583892 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8\": container with ID starting with 53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8 not found: ID does not exist" containerID="53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.583951 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8"} err="failed to get container status \"53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8\": rpc error: code = NotFound desc = could not find container 
\"53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8\": container with ID starting with 53843c95a0712abe9f7c46fc175ae7ea050608e4539a7e482955eaa0710dffd8 not found: ID does not exist" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.603311 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5z7k2"] Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.610304 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-5z7k2"] Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.659959 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ede8cb-cc18-422f-bb46-ce8e07a39538" path="/var/lib/kubelet/pods/a3ede8cb-cc18-422f-bb46-ce8e07a39538/volumes" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.715725 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-64sx8" Nov 23 04:07:32 crc kubenswrapper[4751]: I1123 04:07:32.827486 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-5kht5" Nov 23 04:07:33 crc kubenswrapper[4751]: I1123 04:07:33.320206 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dgb9d" Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.114839 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.115539 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.115602 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.116500 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4237ad3d8d19c6b3e554a7d8760278ed9d3d36fb9422fb2c3e4180d1664e464"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.116604 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://b4237ad3d8d19c6b3e554a7d8760278ed9d3d36fb9422fb2c3e4180d1664e464" gracePeriod=600 Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.618756 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="b4237ad3d8d19c6b3e554a7d8760278ed9d3d36fb9422fb2c3e4180d1664e464" exitCode=0 Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.618879 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"b4237ad3d8d19c6b3e554a7d8760278ed9d3d36fb9422fb2c3e4180d1664e464"} Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.619105 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"92b37deee194d835919b13e465ed8d01f88734ed8d61a3352e53915d15308b01"} Nov 23 04:07:38 crc kubenswrapper[4751]: I1123 04:07:38.619130 4751 scope.go:117] "RemoveContainer" containerID="df0bbfb2499535a0d9716c88e2a6d4d180a4c4bf5b034768201f2f7c48197b2e" Nov 23 04:07:40 crc kubenswrapper[4751]: I1123 04:07:40.809811 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:40 crc kubenswrapper[4751]: I1123 04:07:40.810488 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:40 crc kubenswrapper[4751]: I1123 04:07:40.856862 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:41 crc kubenswrapper[4751]: I1123 04:07:41.580799 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pvr4q"] Nov 23 04:07:41 crc kubenswrapper[4751]: I1123 04:07:41.581203 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" podUID="ee400172-faca-43a0-8331-fea8b31505db" containerName="controller-manager" containerID="cri-o://679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63" gracePeriod=30 Nov 23 04:07:41 crc kubenswrapper[4751]: I1123 04:07:41.666700 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"] Nov 23 04:07:41 crc kubenswrapper[4751]: I1123 04:07:41.667250 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" podUID="2790e1c8-65f5-42b1-afca-0f755fdd0f33" containerName="route-controller-manager" containerID="cri-o://4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877" gracePeriod=30 Nov 23 04:07:41 crc kubenswrapper[4751]: I1123 04:07:41.678703 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zm8mr" Nov 23 04:07:41 crc kubenswrapper[4751]: I1123 04:07:41.970172 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.057691 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.069429 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee400172-faca-43a0-8331-fea8b31505db-serving-cert\") pod \"ee400172-faca-43a0-8331-fea8b31505db\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.069556 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj8cb\" (UniqueName: \"kubernetes.io/projected/ee400172-faca-43a0-8331-fea8b31505db-kube-api-access-dj8cb\") pod \"ee400172-faca-43a0-8331-fea8b31505db\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.069587 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-client-ca\") pod \"ee400172-faca-43a0-8331-fea8b31505db\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.069622 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-proxy-ca-bundles\") pod \"ee400172-faca-43a0-8331-fea8b31505db\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.069658 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-config\") pod \"ee400172-faca-43a0-8331-fea8b31505db\" (UID: \"ee400172-faca-43a0-8331-fea8b31505db\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.070561 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee400172-faca-43a0-8331-fea8b31505db" (UID: "ee400172-faca-43a0-8331-fea8b31505db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.070638 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ee400172-faca-43a0-8331-fea8b31505db" (UID: "ee400172-faca-43a0-8331-fea8b31505db"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.070673 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-config" (OuterVolumeSpecName: "config") pod "ee400172-faca-43a0-8331-fea8b31505db" (UID: "ee400172-faca-43a0-8331-fea8b31505db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.076666 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee400172-faca-43a0-8331-fea8b31505db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee400172-faca-43a0-8331-fea8b31505db" (UID: "ee400172-faca-43a0-8331-fea8b31505db"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.077707 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee400172-faca-43a0-8331-fea8b31505db-kube-api-access-dj8cb" (OuterVolumeSpecName: "kube-api-access-dj8cb") pod "ee400172-faca-43a0-8331-fea8b31505db" (UID: "ee400172-faca-43a0-8331-fea8b31505db"). InnerVolumeSpecName "kube-api-access-dj8cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.171390 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2790e1c8-65f5-42b1-afca-0f755fdd0f33-serving-cert\") pod \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.171640 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2hzd\" (UniqueName: \"kubernetes.io/projected/2790e1c8-65f5-42b1-afca-0f755fdd0f33-kube-api-access-v2hzd\") pod \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.171685 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-config\") pod \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.171715 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-client-ca\") pod \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\" (UID: \"2790e1c8-65f5-42b1-afca-0f755fdd0f33\") " Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.172163 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.172190 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.172203 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee400172-faca-43a0-8331-fea8b31505db-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.172215 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj8cb\" (UniqueName: \"kubernetes.io/projected/ee400172-faca-43a0-8331-fea8b31505db-kube-api-access-dj8cb\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.172228 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee400172-faca-43a0-8331-fea8b31505db-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.173518 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-config" (OuterVolumeSpecName: "config") pod "2790e1c8-65f5-42b1-afca-0f755fdd0f33" (UID: 
"2790e1c8-65f5-42b1-afca-0f755fdd0f33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.173722 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-client-ca" (OuterVolumeSpecName: "client-ca") pod "2790e1c8-65f5-42b1-afca-0f755fdd0f33" (UID: "2790e1c8-65f5-42b1-afca-0f755fdd0f33"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.176384 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2790e1c8-65f5-42b1-afca-0f755fdd0f33-kube-api-access-v2hzd" (OuterVolumeSpecName: "kube-api-access-v2hzd") pod "2790e1c8-65f5-42b1-afca-0f755fdd0f33" (UID: "2790e1c8-65f5-42b1-afca-0f755fdd0f33"). InnerVolumeSpecName "kube-api-access-v2hzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.177159 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2790e1c8-65f5-42b1-afca-0f755fdd0f33-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2790e1c8-65f5-42b1-afca-0f755fdd0f33" (UID: "2790e1c8-65f5-42b1-afca-0f755fdd0f33"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.273849 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2790e1c8-65f5-42b1-afca-0f755fdd0f33-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.273893 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2hzd\" (UniqueName: \"kubernetes.io/projected/2790e1c8-65f5-42b1-afca-0f755fdd0f33-kube-api-access-v2hzd\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.273914 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.273928 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2790e1c8-65f5-42b1-afca-0f755fdd0f33-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.654931 4751 generic.go:334] "Generic (PLEG): container finished" podID="2790e1c8-65f5-42b1-afca-0f755fdd0f33" containerID="4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877" exitCode=0 Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.655026 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" event={"ID":"2790e1c8-65f5-42b1-afca-0f755fdd0f33","Type":"ContainerDied","Data":"4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877"} Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.655066 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" event={"ID":"2790e1c8-65f5-42b1-afca-0f755fdd0f33","Type":"ContainerDied","Data":"b35d341f3b52b4706736a0159ea64b1eb2588a4a4827a5f18f26769efd198ac4"} Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.655093 4751 scope.go:117] 
"RemoveContainer" containerID="4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.655255 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.661457 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee400172-faca-43a0-8331-fea8b31505db" containerID="679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63" exitCode=0 Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.661696 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.661699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" event={"ID":"ee400172-faca-43a0-8331-fea8b31505db","Type":"ContainerDied","Data":"679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63"} Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.661756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pvr4q" event={"ID":"ee400172-faca-43a0-8331-fea8b31505db","Type":"ContainerDied","Data":"570a643902743aaefb89472fe25712eafd6091b3895548a6e4d7e6a1adede02e"} Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.686225 4751 scope.go:117] "RemoveContainer" containerID="4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877" Nov 23 04:07:42 crc kubenswrapper[4751]: E1123 04:07:42.686786 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877\": container with ID starting with 4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877 not found: ID does not exist" containerID="4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.686840 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877"} err="failed to get container status \"4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877\": rpc error: code = NotFound desc = could not find container \"4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877\": container with ID starting with 4f1ef30364018f0f8e4d6fcbd6cf60056d75039616ce0967d07910e843c7b877 not found: ID does not exist" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.686875 4751 scope.go:117] "RemoveContainer" containerID="679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.711900 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pvr4q"] Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.718339 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pvr4q"] Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.757285 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"] Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.760029 4751 scope.go:117] 
"RemoveContainer" containerID="679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63" Nov 23 04:07:42 crc kubenswrapper[4751]: E1123 04:07:42.761152 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63\": container with ID starting with 679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63 not found: ID does not exist" containerID="679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.761217 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63"} err="failed to get container status \"679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63\": rpc error: code = NotFound desc = could not find container \"679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63\": container with ID starting with 679529170bb563a69579c54834fa254939df45a1a16ce5baacf04022227e6f63 not found: ID does not exist" Nov 23 04:07:42 crc kubenswrapper[4751]: I1123 04:07:42.767005 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpvjg"] Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.786511 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d7495955d-wj8v7"] Nov 23 04:07:43 crc kubenswrapper[4751]: E1123 04:07:43.786773 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee400172-faca-43a0-8331-fea8b31505db" containerName="controller-manager" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.786788 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee400172-faca-43a0-8331-fea8b31505db" containerName="controller-manager" Nov 23 04:07:43 crc kubenswrapper[4751]: E1123 04:07:43.786815 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ede8cb-cc18-422f-bb46-ce8e07a39538" containerName="registry-server" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.786824 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ede8cb-cc18-422f-bb46-ce8e07a39538" containerName="registry-server" Nov 23 04:07:43 crc kubenswrapper[4751]: E1123 04:07:43.786836 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2790e1c8-65f5-42b1-afca-0f755fdd0f33" containerName="route-controller-manager" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.786846 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2790e1c8-65f5-42b1-afca-0f755fdd0f33" containerName="route-controller-manager" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.786994 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee400172-faca-43a0-8331-fea8b31505db" containerName="controller-manager" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.787011 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2790e1c8-65f5-42b1-afca-0f755fdd0f33" containerName="route-controller-manager" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.787021 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ede8cb-cc18-422f-bb46-ce8e07a39538" containerName="registry-server" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.787491 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.790244 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.791189 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.792823 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.793454 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.794672 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.794828 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh"] Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.796052 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.800045 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.800682 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.800755 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.800816 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.801109 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.801252 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d7495955d-wj8v7"] Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.801558 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.802855 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.804270 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.810890 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh"] Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.899055 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-config\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.899390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-client-ca\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.899553 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cb122aa-2bbf-4107-87d6-e7d7d6148139-serving-cert\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.899784 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8kwx\" (UniqueName: \"kubernetes.io/projected/3d5e208c-f33d-47e4-a44c-06e612c466a1-kube-api-access-q8kwx\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.900024 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d5e208c-f33d-47e4-a44c-06e612c466a1-client-ca\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.900492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-proxy-ca-bundles\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.900739 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5e208c-f33d-47e4-a44c-06e612c466a1-config\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.900974 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4pt\" (UniqueName: \"kubernetes.io/projected/9cb122aa-2bbf-4107-87d6-e7d7d6148139-kube-api-access-nl4pt\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:43 crc kubenswrapper[4751]: I1123 04:07:43.901185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3d5e208c-f33d-47e4-a44c-06e612c466a1-serving-cert\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.002313 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d5e208c-f33d-47e4-a44c-06e612c466a1-client-ca\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.002714 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-proxy-ca-bundles\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.002879 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5e208c-f33d-47e4-a44c-06e612c466a1-config\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.003024 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl4pt\" (UniqueName: \"kubernetes.io/projected/9cb122aa-2bbf-4107-87d6-e7d7d6148139-kube-api-access-nl4pt\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.003143 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d5e208c-f33d-47e4-a44c-06e612c466a1-serving-cert\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.003279 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-config\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.003485 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-client-ca\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.003640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cb122aa-2bbf-4107-87d6-e7d7d6148139-serving-cert\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: 
\"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.003794 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8kwx\" (UniqueName: \"kubernetes.io/projected/3d5e208c-f33d-47e4-a44c-06e612c466a1-kube-api-access-q8kwx\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.004038 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d5e208c-f33d-47e4-a44c-06e612c466a1-client-ca\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.004073 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5e208c-f33d-47e4-a44c-06e612c466a1-config\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.004706 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-proxy-ca-bundles\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.004952 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-client-ca\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.006964 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cb122aa-2bbf-4107-87d6-e7d7d6148139-config\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.018590 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d5e208c-f33d-47e4-a44c-06e612c466a1-serving-cert\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.018597 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cb122aa-2bbf-4107-87d6-e7d7d6148139-serving-cert\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.026784 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q8kwx\" (UniqueName: \"kubernetes.io/projected/3d5e208c-f33d-47e4-a44c-06e612c466a1-kube-api-access-q8kwx\") pod \"route-controller-manager-549f7bc47d-57kwh\" (UID: \"3d5e208c-f33d-47e4-a44c-06e612c466a1\") " pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.033565 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl4pt\" (UniqueName: \"kubernetes.io/projected/9cb122aa-2bbf-4107-87d6-e7d7d6148139-kube-api-access-nl4pt\") pod \"controller-manager-5d7495955d-wj8v7\" (UID: \"9cb122aa-2bbf-4107-87d6-e7d7d6148139\") " pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.117952 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.132685 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.407427 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d7495955d-wj8v7"] Nov 23 04:07:44 crc kubenswrapper[4751]: W1123 04:07:44.416954 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cb122aa_2bbf_4107_87d6_e7d7d6148139.slice/crio-99156bbfa7449a7ac14c9d9e357e2b85e86cc26e1dc5dd7cb8abbf822b78b0f9 WatchSource:0}: Error finding container 99156bbfa7449a7ac14c9d9e357e2b85e86cc26e1dc5dd7cb8abbf822b78b0f9: Status 404 returned error can't find the container with id 99156bbfa7449a7ac14c9d9e357e2b85e86cc26e1dc5dd7cb8abbf822b78b0f9 Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.601185 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh"] Nov 23 04:07:44 crc kubenswrapper[4751]: W1123 04:07:44.616488 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d5e208c_f33d_47e4_a44c_06e612c466a1.slice/crio-6d405b2e90cb01f95db7c0e401aa0ebc115d274b730b394737e6409e0d7068f6 WatchSource:0}: Error finding container 6d405b2e90cb01f95db7c0e401aa0ebc115d274b730b394737e6409e0d7068f6: Status 404 returned error can't find the container with id 6d405b2e90cb01f95db7c0e401aa0ebc115d274b730b394737e6409e0d7068f6 Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.658761 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2790e1c8-65f5-42b1-afca-0f755fdd0f33" path="/var/lib/kubelet/pods/2790e1c8-65f5-42b1-afca-0f755fdd0f33/volumes" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.660166 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee400172-faca-43a0-8331-fea8b31505db" path="/var/lib/kubelet/pods/ee400172-faca-43a0-8331-fea8b31505db/volumes" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.698741 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" event={"ID":"9cb122aa-2bbf-4107-87d6-e7d7d6148139","Type":"ContainerStarted","Data":"06bd9f1aa2067e73fb894aeefa918b89dd146a81164a50c8664b77e6dbfd461e"} Nov 23 04:07:44 crc 
kubenswrapper[4751]: I1123 04:07:44.698803 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" event={"ID":"9cb122aa-2bbf-4107-87d6-e7d7d6148139","Type":"ContainerStarted","Data":"99156bbfa7449a7ac14c9d9e357e2b85e86cc26e1dc5dd7cb8abbf822b78b0f9"} Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.700249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.703672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" event={"ID":"3d5e208c-f33d-47e4-a44c-06e612c466a1","Type":"ContainerStarted","Data":"6d405b2e90cb01f95db7c0e401aa0ebc115d274b730b394737e6409e0d7068f6"} Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.703677 4751 patch_prober.go:28] interesting pod/controller-manager-5d7495955d-wj8v7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" start-of-body= Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.703798 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" podUID="9cb122aa-2bbf-4107-87d6-e7d7d6148139" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" Nov 23 04:07:44 crc kubenswrapper[4751]: I1123 04:07:44.726952 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" podStartSLOduration=3.726925989 podStartE2EDuration="3.726925989s" podCreationTimestamp="2025-11-23 04:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:07:44.725174474 +0000 UTC m=+760.918845833" watchObservedRunningTime="2025-11-23 04:07:44.726925989 +0000 UTC m=+760.920597378" Nov 23 04:07:45 crc kubenswrapper[4751]: I1123 04:07:45.715234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" event={"ID":"3d5e208c-f33d-47e4-a44c-06e612c466a1","Type":"ContainerStarted","Data":"2f73c253a5732c8139e9a2e7c4e2cf0f0575b33256483a9a9fe58b630cec9d8f"} Nov 23 04:07:45 crc kubenswrapper[4751]: I1123 04:07:45.721061 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d7495955d-wj8v7" Nov 23 04:07:45 crc kubenswrapper[4751]: I1123 04:07:45.741550 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" podStartSLOduration=4.741535132 podStartE2EDuration="4.741535132s" podCreationTimestamp="2025-11-23 04:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:07:45.738628646 +0000 UTC m=+761.932300045" watchObservedRunningTime="2025-11-23 04:07:45.741535132 +0000 UTC m=+761.935206491" Nov 23 04:07:46 crc kubenswrapper[4751]: I1123 04:07:46.724618 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:46 crc kubenswrapper[4751]: I1123 04:07:46.733419 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-549f7bc47d-57kwh" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.779335 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk"] Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.780959 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.783677 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-5j88t" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.794654 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk"] Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.868224 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-util\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.868270 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-bundle\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.868316 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bshr5\" (UniqueName: \"kubernetes.io/projected/253da625-d87e-4a1c-823d-dd201b0fc1bf-kube-api-access-bshr5\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.970563 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-util\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.970650 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-bundle\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.970742 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bshr5\" (UniqueName: \"kubernetes.io/projected/253da625-d87e-4a1c-823d-dd201b0fc1bf-kube-api-access-bshr5\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.971043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-util\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:48 crc kubenswrapper[4751]: I1123 04:07:48.971268 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-bundle\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:49 crc kubenswrapper[4751]: I1123 04:07:49.012582 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bshr5\" (UniqueName: \"kubernetes.io/projected/253da625-d87e-4a1c-823d-dd201b0fc1bf-kube-api-access-bshr5\") pod \"6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:49 crc kubenswrapper[4751]: I1123 04:07:49.097670 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:49 crc kubenswrapper[4751]: I1123 04:07:49.349957 4751 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 04:07:49 crc kubenswrapper[4751]: I1123 04:07:49.584018 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk"] Nov 23 04:07:49 crc kubenswrapper[4751]: I1123 04:07:49.746659 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" event={"ID":"253da625-d87e-4a1c-823d-dd201b0fc1bf","Type":"ContainerStarted","Data":"4579c711770aa03767fb3c1b154ec4957320cd29a441826586d8d88716fa83dc"} Nov 23 04:07:49 crc kubenswrapper[4751]: I1123 04:07:49.746920 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" event={"ID":"253da625-d87e-4a1c-823d-dd201b0fc1bf","Type":"ContainerStarted","Data":"ac20f63a26799740a11a3ee8366cdbdee2583b0d5f550db06b0913560900f150"} Nov 23 04:07:50 crc kubenswrapper[4751]: I1123 04:07:50.766993 4751 generic.go:334] "Generic (PLEG): container finished" podID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerID="4579c711770aa03767fb3c1b154ec4957320cd29a441826586d8d88716fa83dc" exitCode=0 Nov 23 04:07:50 crc kubenswrapper[4751]: I1123 04:07:50.767046 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" event={"ID":"253da625-d87e-4a1c-823d-dd201b0fc1bf","Type":"ContainerDied","Data":"4579c711770aa03767fb3c1b154ec4957320cd29a441826586d8d88716fa83dc"} Nov 23 04:07:51 crc kubenswrapper[4751]: I1123 04:07:51.779621 4751 generic.go:334] "Generic (PLEG): container finished" podID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerID="56b5842c50b967f285a697c36bc4a2b2ee2f640ca8aca4767181246a52dc0da1" exitCode=0 Nov 23 04:07:51 crc kubenswrapper[4751]: I1123 04:07:51.779805 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" event={"ID":"253da625-d87e-4a1c-823d-dd201b0fc1bf","Type":"ContainerDied","Data":"56b5842c50b967f285a697c36bc4a2b2ee2f640ca8aca4767181246a52dc0da1"} Nov 23 04:07:52 crc kubenswrapper[4751]: I1123 04:07:52.798162 4751 generic.go:334] "Generic (PLEG): container finished" podID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerID="81415fabebbada24c6026451040fe0f588a9968a1448a792ef18ce70c0b180f1" exitCode=0 Nov 23 04:07:52 crc kubenswrapper[4751]: I1123 04:07:52.798226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" event={"ID":"253da625-d87e-4a1c-823d-dd201b0fc1bf","Type":"ContainerDied","Data":"81415fabebbada24c6026451040fe0f588a9968a1448a792ef18ce70c0b180f1"} Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.257925 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.356024 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bshr5\" (UniqueName: \"kubernetes.io/projected/253da625-d87e-4a1c-823d-dd201b0fc1bf-kube-api-access-bshr5\") pod \"253da625-d87e-4a1c-823d-dd201b0fc1bf\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.356105 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-util\") pod \"253da625-d87e-4a1c-823d-dd201b0fc1bf\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.356383 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-bundle\") pod \"253da625-d87e-4a1c-823d-dd201b0fc1bf\" (UID: \"253da625-d87e-4a1c-823d-dd201b0fc1bf\") " Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.357643 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-bundle" (OuterVolumeSpecName: "bundle") pod "253da625-d87e-4a1c-823d-dd201b0fc1bf" (UID: "253da625-d87e-4a1c-823d-dd201b0fc1bf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.365563 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253da625-d87e-4a1c-823d-dd201b0fc1bf-kube-api-access-bshr5" (OuterVolumeSpecName: "kube-api-access-bshr5") pod "253da625-d87e-4a1c-823d-dd201b0fc1bf" (UID: "253da625-d87e-4a1c-823d-dd201b0fc1bf"). InnerVolumeSpecName "kube-api-access-bshr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.384876 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-util" (OuterVolumeSpecName: "util") pod "253da625-d87e-4a1c-823d-dd201b0fc1bf" (UID: "253da625-d87e-4a1c-823d-dd201b0fc1bf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.458477 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.458529 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bshr5\" (UniqueName: \"kubernetes.io/projected/253da625-d87e-4a1c-823d-dd201b0fc1bf-kube-api-access-bshr5\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.458551 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/253da625-d87e-4a1c-823d-dd201b0fc1bf-util\") on node \"crc\" DevicePath \"\"" Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.819460 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" event={"ID":"253da625-d87e-4a1c-823d-dd201b0fc1bf","Type":"ContainerDied","Data":"ac20f63a26799740a11a3ee8366cdbdee2583b0d5f550db06b0913560900f150"} Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.819789 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac20f63a26799740a11a3ee8366cdbdee2583b0d5f550db06b0913560900f150" Nov 23 04:07:54 crc kubenswrapper[4751]: I1123 04:07:54.819561 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.286112 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr"] Nov 23 04:08:01 crc kubenswrapper[4751]: E1123 04:08:01.287180 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerName="extract" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.287203 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerName="extract" Nov 23 04:08:01 crc kubenswrapper[4751]: E1123 04:08:01.287224 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerName="util" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.287235 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerName="util" Nov 23 04:08:01 crc kubenswrapper[4751]: E1123 04:08:01.287264 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerName="pull" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.287277 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerName="pull" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.287499 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="253da625-d87e-4a1c-823d-dd201b0fc1bf" containerName="extract" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.288902 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.301474 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-c7842" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.321100 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr"] Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.366971 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5p7\" (UniqueName: \"kubernetes.io/projected/292f5bac-dc69-4084-814f-509540c16426-kube-api-access-mt5p7\") pod \"openstack-operator-controller-operator-549d6967c7-krhsr\" (UID: \"292f5bac-dc69-4084-814f-509540c16426\") " pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.467965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5p7\" (UniqueName: \"kubernetes.io/projected/292f5bac-dc69-4084-814f-509540c16426-kube-api-access-mt5p7\") pod \"openstack-operator-controller-operator-549d6967c7-krhsr\" (UID: \"292f5bac-dc69-4084-814f-509540c16426\") " pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.490187 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5p7\" (UniqueName: \"kubernetes.io/projected/292f5bac-dc69-4084-814f-509540c16426-kube-api-access-mt5p7\") pod \"openstack-operator-controller-operator-549d6967c7-krhsr\" (UID: \"292f5bac-dc69-4084-814f-509540c16426\") " pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" Nov 23 04:08:01 crc kubenswrapper[4751]: I1123 04:08:01.626749 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" Nov 23 04:08:02 crc kubenswrapper[4751]: I1123 04:08:02.115209 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr"] Nov 23 04:08:02 crc kubenswrapper[4751]: I1123 04:08:02.884429 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" event={"ID":"292f5bac-dc69-4084-814f-509540c16426","Type":"ContainerStarted","Data":"d3454a19e5b941a7665da9be170bd88c23fb23a4db9f29229a0193880a80b76a"} Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.476444 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z89x5"] Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.478606 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.480018 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z89x5"] Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.612780 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4rg\" (UniqueName: \"kubernetes.io/projected/50bca43a-ce76-4033-b4df-bc6815815538-kube-api-access-8q4rg\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.612853 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-catalog-content\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.612908 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-utilities\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.714640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q4rg\" (UniqueName: \"kubernetes.io/projected/50bca43a-ce76-4033-b4df-bc6815815538-kube-api-access-8q4rg\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.714749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-catalog-content\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.714837 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-utilities\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.715901 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-utilities\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.716696 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-catalog-content\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.755080 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8q4rg\" (UniqueName: \"kubernetes.io/projected/50bca43a-ce76-4033-b4df-bc6815815538-kube-api-access-8q4rg\") pod \"certified-operators-z89x5\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:04 crc kubenswrapper[4751]: I1123 04:08:04.807189 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:06 crc kubenswrapper[4751]: I1123 04:08:06.702613 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z89x5"] Nov 23 04:08:06 crc kubenswrapper[4751]: I1123 04:08:06.914009 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z89x5" event={"ID":"50bca43a-ce76-4033-b4df-bc6815815538","Type":"ContainerStarted","Data":"f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc"} Nov 23 04:08:06 crc kubenswrapper[4751]: I1123 04:08:06.914531 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z89x5" event={"ID":"50bca43a-ce76-4033-b4df-bc6815815538","Type":"ContainerStarted","Data":"8565d62ba378208a1ba0c43a15a91d2f4a4263fa6129e704d92121eea5abe4d1"} Nov 23 04:08:06 crc kubenswrapper[4751]: I1123 04:08:06.915324 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" event={"ID":"292f5bac-dc69-4084-814f-509540c16426","Type":"ContainerStarted","Data":"bb3ff07b3077b0d04334c49ca5e3896d3f93d8692526bbf8487e5f0f79f3db51"} Nov 23 04:08:07 crc kubenswrapper[4751]: I1123 04:08:07.927752 4751 generic.go:334] "Generic (PLEG): container finished" podID="50bca43a-ce76-4033-b4df-bc6815815538" containerID="f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc" exitCode=0 Nov 23 04:08:07 crc kubenswrapper[4751]: I1123 04:08:07.927806 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z89x5" event={"ID":"50bca43a-ce76-4033-b4df-bc6815815538","Type":"ContainerDied","Data":"f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc"} Nov 23 04:08:09 crc kubenswrapper[4751]: I1123 04:08:09.949174 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" event={"ID":"292f5bac-dc69-4084-814f-509540c16426","Type":"ContainerStarted","Data":"89b9dd99ffe193c4271b560b5330a2ce93d15b9e1d46f4719de830845e925d96"} Nov 23 04:08:09 crc kubenswrapper[4751]: I1123 04:08:09.950654 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" Nov 23 04:08:09 crc kubenswrapper[4751]: I1123 04:08:09.952312 4751 generic.go:334] "Generic (PLEG): container finished" podID="50bca43a-ce76-4033-b4df-bc6815815538" containerID="f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38" exitCode=0 Nov 23 04:08:09 crc kubenswrapper[4751]: I1123 04:08:09.952398 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z89x5" event={"ID":"50bca43a-ce76-4033-b4df-bc6815815538","Type":"ContainerDied","Data":"f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38"} Nov 23 04:08:10 crc kubenswrapper[4751]: I1123 04:08:10.005731 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" podStartSLOduration=2.227723935 podStartE2EDuration="9.005707213s" podCreationTimestamp="2025-11-23 04:08:01 +0000 UTC" firstStartedPulling="2025-11-23 04:08:02.121417435 +0000 UTC m=+778.315088804" lastFinishedPulling="2025-11-23 04:08:08.899400723 +0000 UTC m=+785.093072082" observedRunningTime="2025-11-23 04:08:09.998281399 +0000 UTC m=+786.191952798" watchObservedRunningTime="2025-11-23 04:08:10.005707213 +0000 UTC m=+786.199378602" Nov 23 04:08:10 crc kubenswrapper[4751]: I1123 04:08:10.962200 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z89x5" event={"ID":"50bca43a-ce76-4033-b4df-bc6815815538","Type":"ContainerStarted","Data":"31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6"} Nov 23 04:08:10 crc kubenswrapper[4751]: I1123 04:08:10.985541 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z89x5" podStartSLOduration=4.644219616 podStartE2EDuration="6.985523618s" podCreationTimestamp="2025-11-23 04:08:04 +0000 UTC" firstStartedPulling="2025-11-23 04:08:07.99920304 +0000 UTC m=+784.192874399" lastFinishedPulling="2025-11-23 04:08:10.340507012 +0000 UTC m=+786.534178401" observedRunningTime="2025-11-23 04:08:10.984215824 +0000 UTC m=+787.177887193" watchObservedRunningTime="2025-11-23 04:08:10.985523618 +0000 UTC m=+787.179194987" Nov 23 04:08:11 crc kubenswrapper[4751]: I1123 04:08:11.630129 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-549d6967c7-krhsr" Nov 23 04:08:14 crc kubenswrapper[4751]: I1123 04:08:14.808563 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:14 crc kubenswrapper[4751]: I1123 04:08:14.811283 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:14 crc kubenswrapper[4751]: I1123 04:08:14.883734 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:16 crc kubenswrapper[4751]: I1123 04:08:16.072873 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:17 crc kubenswrapper[4751]: I1123 04:08:17.250773 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z89x5"] Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.014509 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z89x5" podUID="50bca43a-ce76-4033-b4df-bc6815815538" containerName="registry-server" containerID="cri-o://31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6" gracePeriod=2 Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.526636 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.635248 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-utilities\") pod \"50bca43a-ce76-4033-b4df-bc6815815538\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.635316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-catalog-content\") pod \"50bca43a-ce76-4033-b4df-bc6815815538\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.635405 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q4rg\" (UniqueName: \"kubernetes.io/projected/50bca43a-ce76-4033-b4df-bc6815815538-kube-api-access-8q4rg\") pod \"50bca43a-ce76-4033-b4df-bc6815815538\" (UID: \"50bca43a-ce76-4033-b4df-bc6815815538\") " Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.636392 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-utilities" (OuterVolumeSpecName: "utilities") pod "50bca43a-ce76-4033-b4df-bc6815815538" (UID: "50bca43a-ce76-4033-b4df-bc6815815538"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.660534 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bca43a-ce76-4033-b4df-bc6815815538-kube-api-access-8q4rg" (OuterVolumeSpecName: "kube-api-access-8q4rg") pod "50bca43a-ce76-4033-b4df-bc6815815538" (UID: "50bca43a-ce76-4033-b4df-bc6815815538"). InnerVolumeSpecName "kube-api-access-8q4rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.697297 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50bca43a-ce76-4033-b4df-bc6815815538" (UID: "50bca43a-ce76-4033-b4df-bc6815815538"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.736604 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.736886 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50bca43a-ce76-4033-b4df-bc6815815538-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:08:18 crc kubenswrapper[4751]: I1123 04:08:18.736898 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q4rg\" (UniqueName: \"kubernetes.io/projected/50bca43a-ce76-4033-b4df-bc6815815538-kube-api-access-8q4rg\") on node \"crc\" DevicePath \"\"" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.028174 4751 generic.go:334] "Generic (PLEG): container finished" podID="50bca43a-ce76-4033-b4df-bc6815815538" containerID="31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6" exitCode=0 Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.028238 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z89x5" event={"ID":"50bca43a-ce76-4033-b4df-bc6815815538","Type":"ContainerDied","Data":"31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6"} Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.028281 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z89x5" event={"ID":"50bca43a-ce76-4033-b4df-bc6815815538","Type":"ContainerDied","Data":"8565d62ba378208a1ba0c43a15a91d2f4a4263fa6129e704d92121eea5abe4d1"} Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.028278 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z89x5" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.028368 4751 scope.go:117] "RemoveContainer" containerID="31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.057701 4751 scope.go:117] "RemoveContainer" containerID="f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.086830 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z89x5"] Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.093820 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z89x5"] Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.098815 4751 scope.go:117] "RemoveContainer" containerID="f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.126461 4751 scope.go:117] "RemoveContainer" containerID="31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6" Nov 23 04:08:19 crc kubenswrapper[4751]: E1123 04:08:19.126973 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6\": container with ID starting with 31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6 not found: ID does not exist" containerID="31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.127013 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6"} err="failed to get container status \"31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6\": rpc error: code = NotFound desc = could not find container \"31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6\": container with ID starting with 31f7cbbfa4a65209ba2733a63ac5db8fb54489e6bd924128d9461939d8eda8a6 not found: ID does not exist" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.127038 4751 scope.go:117] "RemoveContainer" containerID="f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38" Nov 23 04:08:19 crc kubenswrapper[4751]: E1123 04:08:19.127385 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38\": container with ID starting with f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38 not found: ID does not exist" containerID="f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.127415 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38"} err="failed to get container status \"f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38\": rpc error: code = NotFound desc = could not find container \"f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38\": container with ID starting with f92919368930b4b7d5fc454e3aa1a9ba62aca46a1bc1cc95cea399cdd2f47d38 not found: ID does not exist" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.127434 4751 scope.go:117] "RemoveContainer" 
containerID="f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc" Nov 23 04:08:19 crc kubenswrapper[4751]: E1123 04:08:19.127740 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc\": container with ID starting with f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc not found: ID does not exist" containerID="f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc" Nov 23 04:08:19 crc kubenswrapper[4751]: I1123 04:08:19.127764 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc"} err="failed to get container status \"f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc\": rpc error: code = NotFound desc = could not find container \"f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc\": container with ID starting with f419d5cc9316ec5c2184a92e43758a62d2bd64cddbb7de43f4cbe3b4fb557bbc not found: ID does not exist" Nov 23 04:08:20 crc kubenswrapper[4751]: I1123 04:08:20.657113 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50bca43a-ce76-4033-b4df-bc6815815538" path="/var/lib/kubelet/pods/50bca43a-ce76-4033-b4df-bc6815815538/volumes" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.265076 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-whsps"] Nov 23 04:08:38 crc kubenswrapper[4751]: E1123 04:08:38.266030 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bca43a-ce76-4033-b4df-bc6815815538" containerName="registry-server" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.266047 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bca43a-ce76-4033-b4df-bc6815815538" containerName="registry-server" Nov 23 04:08:38 crc kubenswrapper[4751]: E1123 04:08:38.266082 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bca43a-ce76-4033-b4df-bc6815815538" containerName="extract-utilities" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.266092 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bca43a-ce76-4033-b4df-bc6815815538" containerName="extract-utilities" Nov 23 04:08:38 crc kubenswrapper[4751]: E1123 04:08:38.266109 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50bca43a-ce76-4033-b4df-bc6815815538" containerName="extract-content" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.266119 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="50bca43a-ce76-4033-b4df-bc6815815538" containerName="extract-content" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.266439 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="50bca43a-ce76-4033-b4df-bc6815815538" containerName="registry-server" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.268510 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.285941 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whsps"] Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.313729 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-catalog-content\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.313799 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-utilities\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.313831 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdq6\" (UniqueName: \"kubernetes.io/projected/db35e685-b57a-44ab-af2a-03d56b81895b-kube-api-access-shdq6\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.415489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-catalog-content\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.415558 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-utilities\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.415592 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdq6\" (UniqueName: \"kubernetes.io/projected/db35e685-b57a-44ab-af2a-03d56b81895b-kube-api-access-shdq6\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.416385 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-catalog-content\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.416760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-utilities\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.436721 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-shdq6\" (UniqueName: \"kubernetes.io/projected/db35e685-b57a-44ab-af2a-03d56b81895b-kube-api-access-shdq6\") pod \"community-operators-whsps\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.585148 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:38 crc kubenswrapper[4751]: I1123 04:08:38.871356 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whsps"] Nov 23 04:08:39 crc kubenswrapper[4751]: I1123 04:08:39.193355 4751 generic.go:334] "Generic (PLEG): container finished" podID="db35e685-b57a-44ab-af2a-03d56b81895b" containerID="ded27334350433c097c96b7bdd0373759721519f104b84d74c2bed50ecb18eb0" exitCode=0 Nov 23 04:08:39 crc kubenswrapper[4751]: I1123 04:08:39.193453 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whsps" event={"ID":"db35e685-b57a-44ab-af2a-03d56b81895b","Type":"ContainerDied","Data":"ded27334350433c097c96b7bdd0373759721519f104b84d74c2bed50ecb18eb0"} Nov 23 04:08:39 crc kubenswrapper[4751]: I1123 04:08:39.193584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whsps" event={"ID":"db35e685-b57a-44ab-af2a-03d56b81895b","Type":"ContainerStarted","Data":"6eb82cedbff23d028696ee3bccef90613b579207304ec375806b255c6989d0e6"} Nov 23 04:08:40 crc kubenswrapper[4751]: I1123 04:08:40.202533 4751 generic.go:334] "Generic (PLEG): container finished" podID="db35e685-b57a-44ab-af2a-03d56b81895b" containerID="171c6702b95e9de0fae80e4cdbe6c3d6d365135fd6ba08e6150845cbe8648218" exitCode=0 Nov 23 04:08:40 crc kubenswrapper[4751]: I1123 04:08:40.202613 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whsps" event={"ID":"db35e685-b57a-44ab-af2a-03d56b81895b","Type":"ContainerDied","Data":"171c6702b95e9de0fae80e4cdbe6c3d6d365135fd6ba08e6150845cbe8648218"} Nov 23 04:08:41 crc kubenswrapper[4751]: I1123 04:08:41.210470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whsps" event={"ID":"db35e685-b57a-44ab-af2a-03d56b81895b","Type":"ContainerStarted","Data":"e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6"} Nov 23 04:08:41 crc kubenswrapper[4751]: I1123 04:08:41.231366 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-whsps" podStartSLOduration=1.593245403 podStartE2EDuration="3.231329429s" podCreationTimestamp="2025-11-23 04:08:38 +0000 UTC" firstStartedPulling="2025-11-23 04:08:39.204553566 +0000 UTC m=+815.398224965" lastFinishedPulling="2025-11-23 04:08:40.842637602 +0000 UTC m=+817.036308991" observedRunningTime="2025-11-23 04:08:41.229384128 +0000 UTC m=+817.423055507" watchObservedRunningTime="2025-11-23 04:08:41.231329429 +0000 UTC m=+817.425000788" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.258454 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-whr9b"] Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.260151 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.271927 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whr9b"] Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.393492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-utilities\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.393571 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-catalog-content\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.393649 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv9m2\" (UniqueName: \"kubernetes.io/projected/efa09fe3-1c6b-439b-b717-e76800f469f7-kube-api-access-pv9m2\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.495189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv9m2\" (UniqueName: \"kubernetes.io/projected/efa09fe3-1c6b-439b-b717-e76800f469f7-kube-api-access-pv9m2\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.495252 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-utilities\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.495302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-catalog-content\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.495729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-catalog-content\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.495756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-utilities\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.518298 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pv9m2\" (UniqueName: \"kubernetes.io/projected/efa09fe3-1c6b-439b-b717-e76800f469f7-kube-api-access-pv9m2\") pod \"redhat-marketplace-whr9b\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.600312 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:43 crc kubenswrapper[4751]: I1123 04:08:43.833927 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-whr9b"] Nov 23 04:08:43 crc kubenswrapper[4751]: W1123 04:08:43.838556 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefa09fe3_1c6b_439b_b717_e76800f469f7.slice/crio-4bd8bbe5a1a1f4b802e2285a21d397d1d9c30bd5ba8b38c0341ee01c1a60fec9 WatchSource:0}: Error finding container 4bd8bbe5a1a1f4b802e2285a21d397d1d9c30bd5ba8b38c0341ee01c1a60fec9: Status 404 returned error can't find the container with id 4bd8bbe5a1a1f4b802e2285a21d397d1d9c30bd5ba8b38c0341ee01c1a60fec9 Nov 23 04:08:44 crc kubenswrapper[4751]: I1123 04:08:44.229764 4751 generic.go:334] "Generic (PLEG): container finished" podID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerID="73dc81ae0eb4487583c1e69dcf6ece3d7a380df3601947fc47be432e8cf845af" exitCode=0 Nov 23 04:08:44 crc kubenswrapper[4751]: I1123 04:08:44.229801 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whr9b" event={"ID":"efa09fe3-1c6b-439b-b717-e76800f469f7","Type":"ContainerDied","Data":"73dc81ae0eb4487583c1e69dcf6ece3d7a380df3601947fc47be432e8cf845af"} Nov 23 04:08:44 crc kubenswrapper[4751]: I1123 04:08:44.229825 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whr9b" event={"ID":"efa09fe3-1c6b-439b-b717-e76800f469f7","Type":"ContainerStarted","Data":"4bd8bbe5a1a1f4b802e2285a21d397d1d9c30bd5ba8b38c0341ee01c1a60fec9"} Nov 23 04:08:45 crc kubenswrapper[4751]: I1123 04:08:45.239170 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whr9b" event={"ID":"efa09fe3-1c6b-439b-b717-e76800f469f7","Type":"ContainerStarted","Data":"08134b0bdb68728b75bb6f94381831415870bed25e17b428891001c73356ffdd"} Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.248003 4751 generic.go:334] "Generic (PLEG): container finished" podID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerID="08134b0bdb68728b75bb6f94381831415870bed25e17b428891001c73356ffdd" exitCode=0 Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.248133 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whr9b" event={"ID":"efa09fe3-1c6b-439b-b717-e76800f469f7","Type":"ContainerDied","Data":"08134b0bdb68728b75bb6f94381831415870bed25e17b428891001c73356ffdd"} Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.729938 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.731168 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.735084 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-shb9g" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.749871 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.751141 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.752683 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qw9dz" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.760943 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.770078 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.781371 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.782419 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.784980 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-7mq8x" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.798551 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.799882 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.804493 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xkvcc" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.805407 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.813047 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.836449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jlld\" (UniqueName: \"kubernetes.io/projected/18a8d35c-05ad-4057-b88e-e5f0d417678f-kube-api-access-9jlld\") pod \"barbican-operator-controller-manager-75fb479bcc-5jtfh\" (UID: \"18a8d35c-05ad-4057-b88e-e5f0d417678f\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.836565 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbppn\" (UniqueName: \"kubernetes.io/projected/9762a11c-fd59-489a-9a95-7725f4d1c9e4-kube-api-access-rbppn\") pod \"cinder-operator-controller-manager-6498cbf48f-5hn9n\" (UID: \"9762a11c-fd59-489a-9a95-7725f4d1c9e4\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.836646 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9fn\" (UniqueName: \"kubernetes.io/projected/5323f8b0-18ba-42eb-9a73-ee25c2592aea-kube-api-access-bj9fn\") pod \"designate-operator-controller-manager-767ccfd65f-vrm2h\" (UID: \"5323f8b0-18ba-42eb-9a73-ee25c2592aea\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.842446 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.843499 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.847159 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-gxhzb" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.851299 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.852369 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.859186 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.862070 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9lbsd" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.865881 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.879589 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.880891 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.888586 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.888934 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-q2rf8" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.889235 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.889589 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.891314 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.892559 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-hkm78" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.907043 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.923683 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-h6524"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.924726 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.927746 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ftpzx" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.936809 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-h6524"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937416 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbppn\" (UniqueName: \"kubernetes.io/projected/9762a11c-fd59-489a-9a95-7725f4d1c9e4-kube-api-access-rbppn\") pod \"cinder-operator-controller-manager-6498cbf48f-5hn9n\" (UID: \"9762a11c-fd59-489a-9a95-7725f4d1c9e4\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jrw\" (UniqueName: \"kubernetes.io/projected/934b92b0-4c8a-48d8-8514-ffe3d566b58d-kube-api-access-q8jrw\") pod \"glance-operator-controller-manager-7969689c84-hlrq4\" (UID: \"934b92b0-4c8a-48d8-8514-ffe3d566b58d\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv98p\" (UniqueName: \"kubernetes.io/projected/3ce24a56-0cc8-4f63-91da-dde87342529b-kube-api-access-qv98p\") pod \"heat-operator-controller-manager-56f54d6746-wh2rd\" (UID: \"3ce24a56-0cc8-4f63-91da-dde87342529b\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937508 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skc8c\" (UniqueName: \"kubernetes.io/projected/59fa9d8a-cc64-478a-be71-fda41132aca9-kube-api-access-skc8c\") pod \"infra-operator-controller-manager-6dd8864d7c-447nt\" (UID: \"59fa9d8a-cc64-478a-be71-fda41132aca9\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937535 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c4mp\" (UniqueName: \"kubernetes.io/projected/8e942272-3be5-4bec-b764-ab18709fbb4d-kube-api-access-2c4mp\") pod \"ironic-operator-controller-manager-99b499f4-7pqmh\" (UID: \"8e942272-3be5-4bec-b764-ab18709fbb4d\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9fn\" (UniqueName: \"kubernetes.io/projected/5323f8b0-18ba-42eb-9a73-ee25c2592aea-kube-api-access-bj9fn\") pod \"designate-operator-controller-manager-767ccfd65f-vrm2h\" (UID: \"5323f8b0-18ba-42eb-9a73-ee25c2592aea\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937584 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jlld\" (UniqueName: 
\"kubernetes.io/projected/18a8d35c-05ad-4057-b88e-e5f0d417678f-kube-api-access-9jlld\") pod \"barbican-operator-controller-manager-75fb479bcc-5jtfh\" (UID: \"18a8d35c-05ad-4057-b88e-e5f0d417678f\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937603 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59fa9d8a-cc64-478a-be71-fda41132aca9-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-447nt\" (UID: \"59fa9d8a-cc64-478a-be71-fda41132aca9\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.937620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksblc\" (UniqueName: \"kubernetes.io/projected/abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa-kube-api-access-ksblc\") pod \"horizon-operator-controller-manager-598f69df5d-js2kh\" (UID: \"abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.952549 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-5755c"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.953991 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.954927 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.955992 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.958074 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkmgk" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.958688 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4vzzs" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.958802 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-5755c"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.963189 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.983857 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.984876 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.986005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jlld\" (UniqueName: \"kubernetes.io/projected/18a8d35c-05ad-4057-b88e-e5f0d417678f-kube-api-access-9jlld\") pod \"barbican-operator-controller-manager-75fb479bcc-5jtfh\" (UID: \"18a8d35c-05ad-4057-b88e-e5f0d417678f\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.986513 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fbxqr" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.989128 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b"] Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.990440 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbppn\" (UniqueName: \"kubernetes.io/projected/9762a11c-fd59-489a-9a95-7725f4d1c9e4-kube-api-access-rbppn\") pod \"cinder-operator-controller-manager-6498cbf48f-5hn9n\" (UID: \"9762a11c-fd59-489a-9a95-7725f4d1c9e4\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.990547 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" Nov 23 04:08:46 crc kubenswrapper[4751]: I1123 04:08:46.994839 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9fn\" (UniqueName: \"kubernetes.io/projected/5323f8b0-18ba-42eb-9a73-ee25c2592aea-kube-api-access-bj9fn\") pod \"designate-operator-controller-manager-767ccfd65f-vrm2h\" (UID: \"5323f8b0-18ba-42eb-9a73-ee25c2592aea\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.000845 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-xk9wr" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.001411 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.009410 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.031231 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.035294 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039501 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jrw\" (UniqueName: \"kubernetes.io/projected/934b92b0-4c8a-48d8-8514-ffe3d566b58d-kube-api-access-q8jrw\") pod \"glance-operator-controller-manager-7969689c84-hlrq4\" (UID: \"934b92b0-4c8a-48d8-8514-ffe3d566b58d\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv98p\" (UniqueName: \"kubernetes.io/projected/3ce24a56-0cc8-4f63-91da-dde87342529b-kube-api-access-qv98p\") pod \"heat-operator-controller-manager-56f54d6746-wh2rd\" (UID: \"3ce24a56-0cc8-4f63-91da-dde87342529b\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039594 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2nf5\" (UniqueName: \"kubernetes.io/projected/f65918da-cd67-4138-bcb4-1316d398b30e-kube-api-access-v2nf5\") pod \"mariadb-operator-controller-manager-54b5986bb8-rr4vk\" (UID: \"f65918da-cd67-4138-bcb4-1316d398b30e\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039628 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrfw\" (UniqueName: \"kubernetes.io/projected/d877cc36-10d3-4ba0-8140-ad4f89a2b855-kube-api-access-lhrfw\") pod \"manila-operator-controller-manager-58f887965d-5755c\" (UID: \"d877cc36-10d3-4ba0-8140-ad4f89a2b855\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039652 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skc8c\" (UniqueName: \"kubernetes.io/projected/59fa9d8a-cc64-478a-be71-fda41132aca9-kube-api-access-skc8c\") pod \"infra-operator-controller-manager-6dd8864d7c-447nt\" (UID: \"59fa9d8a-cc64-478a-be71-fda41132aca9\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039680 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd6bq\" (UniqueName: \"kubernetes.io/projected/3fe1e718-5530-4899-9a28-fbaa27ed08f4-kube-api-access-dd6bq\") pod \"nova-operator-controller-manager-cfbb9c588-nw29b\" (UID: \"3fe1e718-5530-4899-9a28-fbaa27ed08f4\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039709 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c4mp\" (UniqueName: \"kubernetes.io/projected/8e942272-3be5-4bec-b764-ab18709fbb4d-kube-api-access-2c4mp\") pod \"ironic-operator-controller-manager-99b499f4-7pqmh\" (UID: \"8e942272-3be5-4bec-b764-ab18709fbb4d\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039741 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csssx\" 
(UniqueName: \"kubernetes.io/projected/09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb-kube-api-access-csssx\") pod \"keystone-operator-controller-manager-7454b96578-h6524\" (UID: \"09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039771 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfsb\" (UniqueName: \"kubernetes.io/projected/c1d9b3d4-a044-46e5-be2c-463d728a4c5d-kube-api-access-ldfsb\") pod \"neutron-operator-controller-manager-78bd47f458-6dwlq\" (UID: \"c1d9b3d4-a044-46e5-be2c-463d728a4c5d\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039791 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59fa9d8a-cc64-478a-be71-fda41132aca9-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-447nt\" (UID: \"59fa9d8a-cc64-478a-be71-fda41132aca9\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.039815 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksblc\" (UniqueName: \"kubernetes.io/projected/abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa-kube-api-access-ksblc\") pod \"horizon-operator-controller-manager-598f69df5d-js2kh\" (UID: \"abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" Nov 23 04:08:47 crc kubenswrapper[4751]: E1123 04:08:47.043794 4751 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 23 04:08:47 crc kubenswrapper[4751]: E1123 04:08:47.043851 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59fa9d8a-cc64-478a-be71-fda41132aca9-cert podName:59fa9d8a-cc64-478a-be71-fda41132aca9 nodeName:}" failed. No retries permitted until 2025-11-23 04:08:47.543835698 +0000 UTC m=+823.737507057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59fa9d8a-cc64-478a-be71-fda41132aca9-cert") pod "infra-operator-controller-manager-6dd8864d7c-447nt" (UID: "59fa9d8a-cc64-478a-be71-fda41132aca9") : secret "infra-operator-webhook-server-cert" not found Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.049939 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.088650 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.090623 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-h5rzg" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.096165 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.105360 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jrw\" (UniqueName: \"kubernetes.io/projected/934b92b0-4c8a-48d8-8514-ffe3d566b58d-kube-api-access-q8jrw\") pod \"glance-operator-controller-manager-7969689c84-hlrq4\" (UID: \"934b92b0-4c8a-48d8-8514-ffe3d566b58d\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.107407 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.107435 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.108234 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.108518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.109535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skc8c\" (UniqueName: \"kubernetes.io/projected/59fa9d8a-cc64-478a-be71-fda41132aca9-kube-api-access-skc8c\") pod \"infra-operator-controller-manager-6dd8864d7c-447nt\" (UID: \"59fa9d8a-cc64-478a-be71-fda41132aca9\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.114868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv98p\" (UniqueName: \"kubernetes.io/projected/3ce24a56-0cc8-4f63-91da-dde87342529b-kube-api-access-qv98p\") pod \"heat-operator-controller-manager-56f54d6746-wh2rd\" (UID: \"3ce24a56-0cc8-4f63-91da-dde87342529b\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.115033 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.127771 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.128454 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6hcz6" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.128653 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cmtr5" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.129430 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.131662 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.140614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksblc\" (UniqueName: \"kubernetes.io/projected/abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa-kube-api-access-ksblc\") pod \"horizon-operator-controller-manager-598f69df5d-js2kh\" (UID: \"abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.141104 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.142568 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csssx\" (UniqueName: \"kubernetes.io/projected/09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb-kube-api-access-csssx\") pod \"keystone-operator-controller-manager-7454b96578-h6524\" (UID: \"09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.142634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg685\" (UniqueName: \"kubernetes.io/projected/7f713385-4fd0-462a-8812-ae2cc7ad910b-kube-api-access-pg685\") pod \"octavia-operator-controller-manager-54cfbf4c7d-zstqk\" (UID: \"7f713385-4fd0-462a-8812-ae2cc7ad910b\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.142670 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldfsb\" (UniqueName: \"kubernetes.io/projected/c1d9b3d4-a044-46e5-be2c-463d728a4c5d-kube-api-access-ldfsb\") pod \"neutron-operator-controller-manager-78bd47f458-6dwlq\" (UID: \"c1d9b3d4-a044-46e5-be2c-463d728a4c5d\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.142729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2nf5\" (UniqueName: \"kubernetes.io/projected/f65918da-cd67-4138-bcb4-1316d398b30e-kube-api-access-v2nf5\") pod \"mariadb-operator-controller-manager-54b5986bb8-rr4vk\" (UID: 
\"f65918da-cd67-4138-bcb4-1316d398b30e\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.142768 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhrfw\" (UniqueName: \"kubernetes.io/projected/d877cc36-10d3-4ba0-8140-ad4f89a2b855-kube-api-access-lhrfw\") pod \"manila-operator-controller-manager-58f887965d-5755c\" (UID: \"d877cc36-10d3-4ba0-8140-ad4f89a2b855\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.142793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd6bq\" (UniqueName: \"kubernetes.io/projected/3fe1e718-5530-4899-9a28-fbaa27ed08f4-kube-api-access-dd6bq\") pod \"nova-operator-controller-manager-cfbb9c588-nw29b\" (UID: \"3fe1e718-5530-4899-9a28-fbaa27ed08f4\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.144571 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c4mp\" (UniqueName: \"kubernetes.io/projected/8e942272-3be5-4bec-b764-ab18709fbb4d-kube-api-access-2c4mp\") pod \"ironic-operator-controller-manager-99b499f4-7pqmh\" (UID: \"8e942272-3be5-4bec-b764-ab18709fbb4d\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.150518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.166048 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.167722 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7mf27" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.172452 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-q87hr"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.173380 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csssx\" (UniqueName: \"kubernetes.io/projected/09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb-kube-api-access-csssx\") pod \"keystone-operator-controller-manager-7454b96578-h6524\" (UID: \"09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.177200 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd6bq\" (UniqueName: \"kubernetes.io/projected/3fe1e718-5530-4899-9a28-fbaa27ed08f4-kube-api-access-dd6bq\") pod \"nova-operator-controller-manager-cfbb9c588-nw29b\" (UID: \"3fe1e718-5530-4899-9a28-fbaa27ed08f4\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.179816 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.179985 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.181530 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhrfw\" (UniqueName: \"kubernetes.io/projected/d877cc36-10d3-4ba0-8140-ad4f89a2b855-kube-api-access-lhrfw\") pod \"manila-operator-controller-manager-58f887965d-5755c\" (UID: \"d877cc36-10d3-4ba0-8140-ad4f89a2b855\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.182024 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2nf5\" (UniqueName: \"kubernetes.io/projected/f65918da-cd67-4138-bcb4-1316d398b30e-kube-api-access-v2nf5\") pod \"mariadb-operator-controller-manager-54b5986bb8-rr4vk\" (UID: \"f65918da-cd67-4138-bcb4-1316d398b30e\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.193564 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cm78t" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.199422 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldfsb\" (UniqueName: \"kubernetes.io/projected/c1d9b3d4-a044-46e5-be2c-463d728a4c5d-kube-api-access-ldfsb\") pod \"neutron-operator-controller-manager-78bd47f458-6dwlq\" (UID: \"c1d9b3d4-a044-46e5-be2c-463d728a4c5d\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.200149 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.200521 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-q87hr"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.201320 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.236934 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.237990 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.244078 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.244193 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55c89\" (UniqueName: \"kubernetes.io/projected/5176792c-6b3a-46cc-9ddb-5416391ce264-kube-api-access-55c89\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq\" (UID: \"5176792c-6b3a-46cc-9ddb-5416391ce264\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.244238 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5176792c-6b3a-46cc-9ddb-5416391ce264-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq\" (UID: \"5176792c-6b3a-46cc-9ddb-5416391ce264\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.244271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55nh\" (UniqueName: \"kubernetes.io/projected/4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d-kube-api-access-h55nh\") pod \"placement-operator-controller-manager-5b797b8dff-58plx\" (UID: \"4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.244290 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmnbz\" (UniqueName: \"kubernetes.io/projected/d96aa695-fa54-4828-b37d-9c4e5121344a-kube-api-access-fmnbz\") pod \"ovn-operator-controller-manager-54fc5f65b7-scc77\" (UID: \"d96aa695-fa54-4828-b37d-9c4e5121344a\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.244310 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg685\" (UniqueName: \"kubernetes.io/projected/7f713385-4fd0-462a-8812-ae2cc7ad910b-kube-api-access-pg685\") pod \"octavia-operator-controller-manager-54cfbf4c7d-zstqk\" (UID: \"7f713385-4fd0-462a-8812-ae2cc7ad910b\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.247500 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcw9v\" (UniqueName: \"kubernetes.io/projected/b49833ae-797b-42aa-aa69-4ddc939dcad6-kube-api-access-gcw9v\") pod \"swift-operator-controller-manager-d656998f4-q87hr\" (UID: \"b49833ae-797b-42aa-aa69-4ddc939dcad6\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.250553 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-k4szk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.253442 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.274926 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.277949 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whr9b" event={"ID":"efa09fe3-1c6b-439b-b717-e76800f469f7","Type":"ContainerStarted","Data":"cefbab68c34ad54b4f025657ee4215517497c5982ae41a6e7f34d145c55fdade"} Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.292101 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.293197 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.296559 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4nhr6" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.306529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg685\" (UniqueName: \"kubernetes.io/projected/7f713385-4fd0-462a-8812-ae2cc7ad910b-kube-api-access-pg685\") pod \"octavia-operator-controller-manager-54cfbf4c7d-zstqk\" (UID: \"7f713385-4fd0-462a-8812-ae2cc7ad910b\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.331640 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.362381 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.363215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55c89\" (UniqueName: \"kubernetes.io/projected/5176792c-6b3a-46cc-9ddb-5416391ce264-kube-api-access-55c89\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq\" (UID: \"5176792c-6b3a-46cc-9ddb-5416391ce264\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.363255 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5176792c-6b3a-46cc-9ddb-5416391ce264-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq\" (UID: \"5176792c-6b3a-46cc-9ddb-5416391ce264\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.363296 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwdj\" (UniqueName: \"kubernetes.io/projected/763c6ab4-8128-46d4-9c53-45c1b2cd7ecc-kube-api-access-lxwdj\") pod \"telemetry-operator-controller-manager-6d4bf84b58-8ldr4\" (UID: \"763c6ab4-8128-46d4-9c53-45c1b2cd7ecc\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.363331 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55nh\" (UniqueName: \"kubernetes.io/projected/4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d-kube-api-access-h55nh\") pod \"placement-operator-controller-manager-5b797b8dff-58plx\" (UID: \"4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.363379 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmnbz\" (UniqueName: \"kubernetes.io/projected/d96aa695-fa54-4828-b37d-9c4e5121344a-kube-api-access-fmnbz\") pod \"ovn-operator-controller-manager-54fc5f65b7-scc77\" (UID: \"d96aa695-fa54-4828-b37d-9c4e5121344a\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.363405 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lclkw\" (UniqueName: \"kubernetes.io/projected/79f85f0e-fcac-4778-8dac-0a2953ba9c8d-kube-api-access-lclkw\") pod \"test-operator-controller-manager-b4c496f69-7tkw8\" (UID: \"79f85f0e-fcac-4778-8dac-0a2953ba9c8d\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" Nov 23 04:08:47 crc kubenswrapper[4751]: E1123 04:08:47.363566 4751 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 04:08:47 crc kubenswrapper[4751]: E1123 04:08:47.363632 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5176792c-6b3a-46cc-9ddb-5416391ce264-cert podName:5176792c-6b3a-46cc-9ddb-5416391ce264 nodeName:}" failed. No retries permitted until 2025-11-23 04:08:47.863616495 +0000 UTC m=+824.057287844 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5176792c-6b3a-46cc-9ddb-5416391ce264-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" (UID: "5176792c-6b3a-46cc-9ddb-5416391ce264") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.363957 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcw9v\" (UniqueName: \"kubernetes.io/projected/b49833ae-797b-42aa-aa69-4ddc939dcad6-kube-api-access-gcw9v\") pod \"swift-operator-controller-manager-d656998f4-q87hr\" (UID: \"b49833ae-797b-42aa-aa69-4ddc939dcad6\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.394273 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55c89\" (UniqueName: \"kubernetes.io/projected/5176792c-6b3a-46cc-9ddb-5416391ce264-kube-api-access-55c89\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq\" (UID: \"5176792c-6b3a-46cc-9ddb-5416391ce264\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.406955 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmnbz\" (UniqueName: \"kubernetes.io/projected/d96aa695-fa54-4828-b37d-9c4e5121344a-kube-api-access-fmnbz\") pod \"ovn-operator-controller-manager-54fc5f65b7-scc77\" (UID: \"d96aa695-fa54-4828-b37d-9c4e5121344a\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.410811 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.414083 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.416037 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.424450 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.430749 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-zc47h" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.431171 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcw9v\" (UniqueName: \"kubernetes.io/projected/b49833ae-797b-42aa-aa69-4ddc939dcad6-kube-api-access-gcw9v\") pod \"swift-operator-controller-manager-d656998f4-q87hr\" (UID: \"b49833ae-797b-42aa-aa69-4ddc939dcad6\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.432929 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55nh\" (UniqueName: \"kubernetes.io/projected/4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d-kube-api-access-h55nh\") pod \"placement-operator-controller-manager-5b797b8dff-58plx\" (UID: \"4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.437171 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.443528 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-whr9b" podStartSLOduration=2.049960576 podStartE2EDuration="4.443514433s" podCreationTimestamp="2025-11-23 04:08:43 +0000 UTC" firstStartedPulling="2025-11-23 04:08:44.231273852 +0000 UTC m=+820.424945211" lastFinishedPulling="2025-11-23 04:08:46.624827709 +0000 UTC m=+822.818499068" observedRunningTime="2025-11-23 04:08:47.438354308 +0000 UTC m=+823.632025667" watchObservedRunningTime="2025-11-23 04:08:47.443514433 +0000 UTC m=+823.637185792" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.449606 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.470039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lclkw\" (UniqueName: \"kubernetes.io/projected/79f85f0e-fcac-4778-8dac-0a2953ba9c8d-kube-api-access-lclkw\") pod \"test-operator-controller-manager-b4c496f69-7tkw8\" (UID: \"79f85f0e-fcac-4778-8dac-0a2953ba9c8d\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.470421 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxwdj\" (UniqueName: \"kubernetes.io/projected/763c6ab4-8128-46d4-9c53-45c1b2cd7ecc-kube-api-access-lxwdj\") pod \"telemetry-operator-controller-manager-6d4bf84b58-8ldr4\" (UID: \"763c6ab4-8128-46d4-9c53-45c1b2cd7ecc\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.492503 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxwdj\" (UniqueName: \"kubernetes.io/projected/763c6ab4-8128-46d4-9c53-45c1b2cd7ecc-kube-api-access-lxwdj\") pod \"telemetry-operator-controller-manager-6d4bf84b58-8ldr4\" (UID: \"763c6ab4-8128-46d4-9c53-45c1b2cd7ecc\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.492762 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.526989 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lclkw\" (UniqueName: \"kubernetes.io/projected/79f85f0e-fcac-4778-8dac-0a2953ba9c8d-kube-api-access-lclkw\") pod \"test-operator-controller-manager-b4c496f69-7tkw8\" (UID: \"79f85f0e-fcac-4778-8dac-0a2953ba9c8d\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.537554 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.576006 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59fa9d8a-cc64-478a-be71-fda41132aca9-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-447nt\" (UID: \"59fa9d8a-cc64-478a-be71-fda41132aca9\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.576133 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsh7q\" (UniqueName: \"kubernetes.io/projected/fd3d207d-1aae-49de-984e-ca3ebf42f864-kube-api-access-qsh7q\") pod \"watcher-operator-controller-manager-8c6448b9f-p76zg\" (UID: \"fd3d207d-1aae-49de-984e-ca3ebf42f864\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.583520 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.584404 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59fa9d8a-cc64-478a-be71-fda41132aca9-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-447nt\" (UID: \"59fa9d8a-cc64-478a-be71-fda41132aca9\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.587490 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.588626 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.594713 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.595306 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.595490 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-w74cw" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.600151 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.619677 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.652797 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.654204 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.669410 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pskm6" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.675984 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd"] Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.713255 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.714006 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mqc\" (UniqueName: \"kubernetes.io/projected/96b8f5ab-6091-4283-b8ff-76a80465d4a0-kube-api-access-d9mqc\") pod \"openstack-operator-controller-manager-5849b9999b-qsxxk\" (UID: \"96b8f5ab-6091-4283-b8ff-76a80465d4a0\") " pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.714153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsh7q\" (UniqueName: \"kubernetes.io/projected/fd3d207d-1aae-49de-984e-ca3ebf42f864-kube-api-access-qsh7q\") pod \"watcher-operator-controller-manager-8c6448b9f-p76zg\" (UID: \"fd3d207d-1aae-49de-984e-ca3ebf42f864\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.714232 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert\") pod \"openstack-operator-controller-manager-5849b9999b-qsxxk\" (UID: \"96b8f5ab-6091-4283-b8ff-76a80465d4a0\") " pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.745918 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsh7q\" (UniqueName: \"kubernetes.io/projected/fd3d207d-1aae-49de-984e-ca3ebf42f864-kube-api-access-qsh7q\") pod \"watcher-operator-controller-manager-8c6448b9f-p76zg\" (UID: \"fd3d207d-1aae-49de-984e-ca3ebf42f864\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.821307 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mqc\" (UniqueName: \"kubernetes.io/projected/96b8f5ab-6091-4283-b8ff-76a80465d4a0-kube-api-access-d9mqc\") pod \"openstack-operator-controller-manager-5849b9999b-qsxxk\" (UID: \"96b8f5ab-6091-4283-b8ff-76a80465d4a0\") " pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.821451 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znsv8\" (UniqueName: \"kubernetes.io/projected/0d5ce886-fa22-4fc1-a369-0311b8a22353-kube-api-access-znsv8\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd\" (UID: \"0d5ce886-fa22-4fc1-a369-0311b8a22353\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.821508 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert\") pod \"openstack-operator-controller-manager-5849b9999b-qsxxk\" (UID: \"96b8f5ab-6091-4283-b8ff-76a80465d4a0\") " pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:47 crc kubenswrapper[4751]: E1123 04:08:47.821987 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 23 04:08:47 crc kubenswrapper[4751]: E1123 
04:08:47.822053 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert podName:96b8f5ab-6091-4283-b8ff-76a80465d4a0 nodeName:}" failed. No retries permitted until 2025-11-23 04:08:48.322034454 +0000 UTC m=+824.515705803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert") pod "openstack-operator-controller-manager-5849b9999b-qsxxk" (UID: "96b8f5ab-6091-4283-b8ff-76a80465d4a0") : secret "webhook-server-cert" not found Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.822244 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.846492 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mqc\" (UniqueName: \"kubernetes.io/projected/96b8f5ab-6091-4283-b8ff-76a80465d4a0-kube-api-access-d9mqc\") pod \"openstack-operator-controller-manager-5849b9999b-qsxxk\" (UID: \"96b8f5ab-6091-4283-b8ff-76a80465d4a0\") " pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.906006 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.923739 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5176792c-6b3a-46cc-9ddb-5416391ce264-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq\" (UID: \"5176792c-6b3a-46cc-9ddb-5416391ce264\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.923824 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znsv8\" (UniqueName: \"kubernetes.io/projected/0d5ce886-fa22-4fc1-a369-0311b8a22353-kube-api-access-znsv8\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd\" (UID: \"0d5ce886-fa22-4fc1-a369-0311b8a22353\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.928114 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5176792c-6b3a-46cc-9ddb-5416391ce264-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq\" (UID: \"5176792c-6b3a-46cc-9ddb-5416391ce264\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:47 crc kubenswrapper[4751]: I1123 04:08:47.962123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znsv8\" (UniqueName: \"kubernetes.io/projected/0d5ce886-fa22-4fc1-a369-0311b8a22353-kube-api-access-znsv8\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd\" (UID: \"0d5ce886-fa22-4fc1-a369-0311b8a22353\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.107014 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.192390 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.330075 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert\") pod \"openstack-operator-controller-manager-5849b9999b-qsxxk\" (UID: \"96b8f5ab-6091-4283-b8ff-76a80465d4a0\") " pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:48 crc kubenswrapper[4751]: E1123 04:08:48.330223 4751 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 23 04:08:48 crc kubenswrapper[4751]: E1123 04:08:48.330268 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert podName:96b8f5ab-6091-4283-b8ff-76a80465d4a0 nodeName:}" failed. No retries permitted until 2025-11-23 04:08:49.330254225 +0000 UTC m=+825.523925584 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert") pod "openstack-operator-controller-manager-5849b9999b-qsxxk" (UID: "96b8f5ab-6091-4283-b8ff-76a80465d4a0") : secret "webhook-server-cert" not found Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.484473 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h"] Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.496934 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n"] Nov 23 04:08:48 crc kubenswrapper[4751]: W1123 04:08:48.501266 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9762a11c_fd59_489a_9a95_7725f4d1c9e4.slice/crio-eabe966428a993156c3ddc02540e1000ca56177fbcc5b9d51f5afac9d34adf34 WatchSource:0}: Error finding container eabe966428a993156c3ddc02540e1000ca56177fbcc5b9d51f5afac9d34adf34: Status 404 returned error can't find the container with id eabe966428a993156c3ddc02540e1000ca56177fbcc5b9d51f5afac9d34adf34 Nov 23 04:08:48 crc kubenswrapper[4751]: W1123 04:08:48.501609 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a8d35c_05ad_4057_b88e_e5f0d417678f.slice/crio-32f6725915072874e314868a3dfb80a6da2d3bb82c37b5e5c5be365a35e36d1a WatchSource:0}: Error finding container 32f6725915072874e314868a3dfb80a6da2d3bb82c37b5e5c5be365a35e36d1a: Status 404 returned error can't find the container with id 32f6725915072874e314868a3dfb80a6da2d3bb82c37b5e5c5be365a35e36d1a Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.508336 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh"] Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.513313 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd"] Nov 23 04:08:48 crc kubenswrapper[4751]: W1123 04:08:48.516287 4751 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ce24a56_0cc8_4f63_91da_dde87342529b.slice/crio-c0a3c9380fe11ef9289ea954f692f61e671f5a5a70271af2b3c994a943a80a24 WatchSource:0}: Error finding container c0a3c9380fe11ef9289ea954f692f61e671f5a5a70271af2b3c994a943a80a24: Status 404 returned error can't find the container with id c0a3c9380fe11ef9289ea954f692f61e671f5a5a70271af2b3c994a943a80a24 Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.517851 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4"] Nov 23 04:08:48 crc kubenswrapper[4751]: W1123 04:08:48.524806 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod934b92b0_4c8a_48d8_8514_ffe3d566b58d.slice/crio-71d8b681fe1b04a9a453f887db67601a0cb7e596f8c17127aed70f94537a2a42 WatchSource:0}: Error finding container 71d8b681fe1b04a9a453f887db67601a0cb7e596f8c17127aed70f94537a2a42: Status 404 returned error can't find the container with id 71d8b681fe1b04a9a453f887db67601a0cb7e596f8c17127aed70f94537a2a42 Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.586653 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.586706 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.691247 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.818179 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-5755c"] Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.837756 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh"] Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.854122 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b"] Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.872336 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-q87hr"] Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.883402 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq"] Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.888267 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4"] Nov 23 04:08:48 crc kubenswrapper[4751]: W1123 04:08:48.889811 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod763c6ab4_8128_46d4_9c53_45c1b2cd7ecc.slice/crio-6ae5fa054225aa400e79b06ac4d74eb951b880ef9e2fbe61ac892767dcc006a0 WatchSource:0}: Error finding container 6ae5fa054225aa400e79b06ac4d74eb951b880ef9e2fbe61ac892767dcc006a0: Status 404 returned error can't find the container with id 6ae5fa054225aa400e79b06ac4d74eb951b880ef9e2fbe61ac892767dcc006a0 Nov 23 04:08:48 crc kubenswrapper[4751]: I1123 04:08:48.893910 4751 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77"] Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.102086 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk"] Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.110820 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg"] Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.124777 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt"] Nov 23 04:08:49 crc kubenswrapper[4751]: W1123 04:08:49.125460 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e942272_3be5_4bec_b764_ab18709fbb4d.slice/crio-b0e081ae9f67ae9c52104908404f282475ccb8481a1580b6e3b032a76b3f082a WatchSource:0}: Error finding container b0e081ae9f67ae9c52104908404f282475ccb8481a1580b6e3b032a76b3f082a: Status 404 returned error can't find the container with id b0e081ae9f67ae9c52104908404f282475ccb8481a1580b6e3b032a76b3f082a Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.138122 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh"] Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.146911 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2c4mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-99b499f4-7pqmh_openstack-operators(8e942272-3be5-4bec-b764-ab18709fbb4d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.147122 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skc8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
infra-operator-controller-manager-6dd8864d7c-447nt_openstack-operators(59fa9d8a-cc64-478a-be71-fda41132aca9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.149308 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq"] Nov 23 04:08:49 crc kubenswrapper[4751]: W1123 04:08:49.157319 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5176792c_6b3a_46cc_9ddb_5416391ce264.slice/crio-fb01f1c1dbe14eeccf603ff497bd591e432fe586f21fb070516c6efe082aa7c1 WatchSource:0}: Error finding container fb01f1c1dbe14eeccf603ff497bd591e432fe586f21fb070516c6efe082aa7c1: Status 404 returned error can't find the container with id fb01f1c1dbe14eeccf603ff497bd591e432fe586f21fb070516c6efe082aa7c1 Nov 23 04:08:49 crc kubenswrapper[4751]: W1123 04:08:49.158396 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f85f0e_fcac_4778_8dac_0a2953ba9c8d.slice/crio-17d51eda02b135c95a89a67f31747c4f4b3a4dfe8014f0cbc699b9b60735e7ce WatchSource:0}: Error finding container 17d51eda02b135c95a89a67f31747c4f4b3a4dfe8014f0cbc699b9b60735e7ce: Status 404 returned error can't find the container with id 17d51eda02b135c95a89a67f31747c4f4b3a4dfe8014f0cbc699b9b60735e7ce Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.158848 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx"] Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.160483 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qsh7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-8c6448b9f-p76zg_openstack-operators(fd3d207d-1aae-49de-984e-ca3ebf42f864): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.160803 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podifie
d-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DE
FAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/open
stack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-cent
os9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55c89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq_openstack-operators(5176792c-6b3a-46cc-9ddb-5416391ce264): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.164655 4751 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lclkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-b4c496f69-7tkw8_openstack-operators(79f85f0e-fcac-4778-8dac-0a2953ba9c8d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.174500 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8"] Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.178722 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h55nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b797b8dff-58plx_openstack-operators(4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.179937 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk"] Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.188936 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pg685,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-54cfbf4c7d-zstqk_openstack-operators(7f713385-4fd0-462a-8812-ae2cc7ad910b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.189045 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-csssx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7454b96578-h6524_openstack-operators(09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.205392 4751 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-h6524"] Nov 23 04:08:49 crc kubenswrapper[4751]: W1123 04:08:49.209592 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d5ce886_fa22_4fc1_a369_0311b8a22353.slice/crio-96f155dbde42a71fb007bf15907e23545621168d528716dd4fe22a76ca8a374e WatchSource:0}: Error finding container 96f155dbde42a71fb007bf15907e23545621168d528716dd4fe22a76ca8a374e: Status 404 returned error can't find the container with id 96f155dbde42a71fb007bf15907e23545621168d528716dd4fe22a76ca8a374e Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.211651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd"] Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.226618 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znsv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd_openstack-operators(0d5ce886-fa22-4fc1-a369-0311b8a22353): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.231489 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" podUID="0d5ce886-fa22-4fc1-a369-0311b8a22353" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.343127 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" podUID="8e942272-3be5-4bec-b764-ab18709fbb4d" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.345595 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" event={"ID":"fd3d207d-1aae-49de-984e-ca3ebf42f864","Type":"ContainerStarted","Data":"478b0f8ac39c1b9baca1fd2afce37827bc63843ac6d5229b1b09abdae7760048"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.347740 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" event={"ID":"59fa9d8a-cc64-478a-be71-fda41132aca9","Type":"ContainerStarted","Data":"be6466cbaa9472054dafcd11b5bf400e29e308b5ba5b7af4871e0cac20dd943f"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.350182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert\") pod \"openstack-operator-controller-manager-5849b9999b-qsxxk\" (UID: \"96b8f5ab-6091-4283-b8ff-76a80465d4a0\") " pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.351786 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" event={"ID":"7f713385-4fd0-462a-8812-ae2cc7ad910b","Type":"ContainerStarted","Data":"cb9d214171b8458750dad0bd89bee7bdd44ba1aadc6f5f7560e028614d28349c"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.354433 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" event={"ID":"9762a11c-fd59-489a-9a95-7725f4d1c9e4","Type":"ContainerStarted","Data":"eabe966428a993156c3ddc02540e1000ca56177fbcc5b9d51f5afac9d34adf34"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.355649 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" event={"ID":"5323f8b0-18ba-42eb-9a73-ee25c2592aea","Type":"ContainerStarted","Data":"2626b0b23d20a1334805a2b72b5d8402f89375f6b6b7a81bfc76079a6d7ede0b"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.357152 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96b8f5ab-6091-4283-b8ff-76a80465d4a0-cert\") pod \"openstack-operator-controller-manager-5849b9999b-qsxxk\" (UID: \"96b8f5ab-6091-4283-b8ff-76a80465d4a0\") " pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.358889 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" event={"ID":"d96aa695-fa54-4828-b37d-9c4e5121344a","Type":"ContainerStarted","Data":"f2523018d0b1ed5811d4348dc851fa2c1b6459af83c20238e9bcd22c19fae348"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.360017 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" event={"ID":"f65918da-cd67-4138-bcb4-1316d398b30e","Type":"ContainerStarted","Data":"8e2c0de6884ee919fed0b8faf1a8772da4e16e0da5571b5ae12df6abbb2ab504"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.361465 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" event={"ID":"d877cc36-10d3-4ba0-8140-ad4f89a2b855","Type":"ContainerStarted","Data":"29234ed76c55dd1f8913da3f2279e4afed018b21b4a6de6c804d20583165e383"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.363300 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" event={"ID":"4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d","Type":"ContainerStarted","Data":"933d17c64b186543d24f2fb40dadd836f71a26a871f7939f51bb2bf3df040a1c"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.373702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" event={"ID":"abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa","Type":"ContainerStarted","Data":"5b015fdffc8b0e47eadbc5db154d8d0bc23c5cf7a2c7a2a5d3fd5e94aa01bd6f"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.374781 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" event={"ID":"3ce24a56-0cc8-4f63-91da-dde87342529b","Type":"ContainerStarted","Data":"c0a3c9380fe11ef9289ea954f692f61e671f5a5a70271af2b3c994a943a80a24"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.376414 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" event={"ID":"763c6ab4-8128-46d4-9c53-45c1b2cd7ecc","Type":"ContainerStarted","Data":"6ae5fa054225aa400e79b06ac4d74eb951b880ef9e2fbe61ac892767dcc006a0"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.376688 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.380621 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" podUID="5176792c-6b3a-46cc-9ddb-5416391ce264" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.381793 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" event={"ID":"0d5ce886-fa22-4fc1-a369-0311b8a22353","Type":"ContainerStarted","Data":"96f155dbde42a71fb007bf15907e23545621168d528716dd4fe22a76ca8a374e"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.383473 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" event={"ID":"3fe1e718-5530-4899-9a28-fbaa27ed08f4","Type":"ContainerStarted","Data":"6c1a31f64a7f164fd176defcdc428c354b178b4ab616011ecc3a92d7e54786d9"} Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.385566 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" podUID="fd3d207d-1aae-49de-984e-ca3ebf42f864" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.391795 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" podUID="0d5ce886-fa22-4fc1-a369-0311b8a22353" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.392274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" event={"ID":"8e942272-3be5-4bec-b764-ab18709fbb4d","Type":"ContainerStarted","Data":"b0e081ae9f67ae9c52104908404f282475ccb8481a1580b6e3b032a76b3f082a"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.399721 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" event={"ID":"b49833ae-797b-42aa-aa69-4ddc939dcad6","Type":"ContainerStarted","Data":"5da864de1ae455cfb3911ee7135c9fa0043decbd8d160d81f4797c84721e1b57"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.407962 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" event={"ID":"09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb","Type":"ContainerStarted","Data":"730aa3511548a0bf0986cbd857f66d9e0601bab525a6033a40d381486c61d6ad"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.412312 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" event={"ID":"934b92b0-4c8a-48d8-8514-ffe3d566b58d","Type":"ContainerStarted","Data":"71d8b681fe1b04a9a453f887db67601a0cb7e596f8c17127aed70f94537a2a42"} Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.423410 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" podUID="8e942272-3be5-4bec-b764-ab18709fbb4d" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.429252 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" event={"ID":"c1d9b3d4-a044-46e5-be2c-463d728a4c5d","Type":"ContainerStarted","Data":"e2af07a6985cca845d625635cb33225d74bcb504850d5c1804c34646c0ad0cba"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.449683 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" event={"ID":"79f85f0e-fcac-4778-8dac-0a2953ba9c8d","Type":"ContainerStarted","Data":"17d51eda02b135c95a89a67f31747c4f4b3a4dfe8014f0cbc699b9b60735e7ce"} Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.455226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" event={"ID":"5176792c-6b3a-46cc-9ddb-5416391ce264","Type":"ContainerStarted","Data":"fb01f1c1dbe14eeccf603ff497bd591e432fe586f21fb070516c6efe082aa7c1"} Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.457165 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" podUID="5176792c-6b3a-46cc-9ddb-5416391ce264" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.461025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" event={"ID":"18a8d35c-05ad-4057-b88e-e5f0d417678f","Type":"ContainerStarted","Data":"32f6725915072874e314868a3dfb80a6da2d3bb82c37b5e5c5be365a35e36d1a"} Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.517427 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" podUID="09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.536573 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-whsps" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.658037 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" podUID="59fa9d8a-cc64-478a-be71-fda41132aca9" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.669913 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" podUID="7f713385-4fd0-462a-8812-ae2cc7ad910b" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.678884 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" podUID="4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d" Nov 23 04:08:49 crc kubenswrapper[4751]: E1123 04:08:49.680805 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" podUID="79f85f0e-fcac-4778-8dac-0a2953ba9c8d" Nov 23 04:08:49 crc kubenswrapper[4751]: I1123 04:08:49.791705 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk"] Nov 23 04:08:49 crc kubenswrapper[4751]: W1123 04:08:49.805288 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b8f5ab_6091_4283_b8ff_76a80465d4a0.slice/crio-2a52848be2076ce729db77f63e5855316fee70fdbc6cf10f58b4789743853ddf WatchSource:0}: Error finding container 2a52848be2076ce729db77f63e5855316fee70fdbc6cf10f58b4789743853ddf: Status 404 returned error can't find the container with id 2a52848be2076ce729db77f63e5855316fee70fdbc6cf10f58b4789743853ddf Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.484266 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" event={"ID":"59fa9d8a-cc64-478a-be71-fda41132aca9","Type":"ContainerStarted","Data":"06cf0b918630725bf617c812701c7d3572db424c4efa411e6b1a49b94a381961"} Nov 23 04:08:50 crc kubenswrapper[4751]: E1123 04:08:50.485195 4751 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" podUID="59fa9d8a-cc64-478a-be71-fda41132aca9" Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.486184 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" event={"ID":"fd3d207d-1aae-49de-984e-ca3ebf42f864","Type":"ContainerStarted","Data":"4fadf87779f7a5d15f00b8c6338faa74bf17eb4c1a83305fd7e56ae5783756ec"} Nov 23 04:08:50 crc kubenswrapper[4751]: E1123 04:08:50.488840 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" podUID="fd3d207d-1aae-49de-984e-ca3ebf42f864" Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.491486 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" event={"ID":"79f85f0e-fcac-4778-8dac-0a2953ba9c8d","Type":"ContainerStarted","Data":"48aa33968ab017b32169edc2262acc23c52b8aa796236a9852267a9fcd80be98"} Nov 23 04:08:50 crc kubenswrapper[4751]: E1123 04:08:50.497033 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" podUID="79f85f0e-fcac-4778-8dac-0a2953ba9c8d" Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.503138 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" event={"ID":"5176792c-6b3a-46cc-9ddb-5416391ce264","Type":"ContainerStarted","Data":"b31120d4b8435fdaf31889c723cb6505a6bc821471504ece92fd170c3b85ca2c"} Nov 23 04:08:50 crc kubenswrapper[4751]: E1123 04:08:50.505255 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" podUID="5176792c-6b3a-46cc-9ddb-5416391ce264" Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.506969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" event={"ID":"8e942272-3be5-4bec-b764-ab18709fbb4d","Type":"ContainerStarted","Data":"2c2d93c61920d5a263b936d771500177e66d4261560a70816331a9f73aec069b"} Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.509026 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" event={"ID":"09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb","Type":"ContainerStarted","Data":"65f244d157730e54c72673d9c12c04589fe668d143e9d0ffdec66b5e7cb491ab"} Nov 23 
04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.511679 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" event={"ID":"7f713385-4fd0-462a-8812-ae2cc7ad910b","Type":"ContainerStarted","Data":"6a8ddb15b184d50e2692cd1d6948f82b13f4b15d9cc5abd87ac5990ce02bbd29"} Nov 23 04:08:50 crc kubenswrapper[4751]: E1123 04:08:50.512662 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" podUID="8e942272-3be5-4bec-b764-ab18709fbb4d" Nov 23 04:08:50 crc kubenswrapper[4751]: E1123 04:08:50.513017 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" podUID="7f713385-4fd0-462a-8812-ae2cc7ad910b" Nov 23 04:08:50 crc kubenswrapper[4751]: E1123 04:08:50.513548 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" podUID="09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb" Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.550641 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" event={"ID":"96b8f5ab-6091-4283-b8ff-76a80465d4a0","Type":"ContainerStarted","Data":"8cc8d0f43be26322f49b8d8047f5e00c480228408f77dbc56c5c153f63a7d8e1"} Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.550695 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" event={"ID":"96b8f5ab-6091-4283-b8ff-76a80465d4a0","Type":"ContainerStarted","Data":"2a52848be2076ce729db77f63e5855316fee70fdbc6cf10f58b4789743853ddf"} Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.551071 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.578646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" event={"ID":"4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d","Type":"ContainerStarted","Data":"33335671df97773d3916bcea94e5f55c66befea123dcbff5ccadae9fe531866c"} Nov 23 04:08:50 crc kubenswrapper[4751]: E1123 04:08:50.580872 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" podUID="0d5ce886-fa22-4fc1-a369-0311b8a22353" Nov 23 04:08:50 crc kubenswrapper[4751]: 
E1123 04:08:50.580879 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" podUID="4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d" Nov 23 04:08:50 crc kubenswrapper[4751]: I1123 04:08:50.598361 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" podStartSLOduration=3.598333743 podStartE2EDuration="3.598333743s" podCreationTimestamp="2025-11-23 04:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:08:50.597368298 +0000 UTC m=+826.791039657" watchObservedRunningTime="2025-11-23 04:08:50.598333743 +0000 UTC m=+826.792005092" Nov 23 04:08:51 crc kubenswrapper[4751]: I1123 04:08:51.586651 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" event={"ID":"96b8f5ab-6091-4283-b8ff-76a80465d4a0","Type":"ContainerStarted","Data":"9a7af2f066010a271b06a24474db418d175f8bf99ee19d13cb66181e486e6fa7"} Nov 23 04:08:51 crc kubenswrapper[4751]: E1123 04:08:51.589470 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" podUID="8e942272-3be5-4bec-b764-ab18709fbb4d" Nov 23 04:08:51 crc kubenswrapper[4751]: E1123 04:08:51.591837 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" podUID="79f85f0e-fcac-4778-8dac-0a2953ba9c8d" Nov 23 04:08:51 crc kubenswrapper[4751]: E1123 04:08:51.591928 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" podUID="5176792c-6b3a-46cc-9ddb-5416391ce264" Nov 23 04:08:51 crc kubenswrapper[4751]: E1123 04:08:51.592154 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" podUID="59fa9d8a-cc64-478a-be71-fda41132aca9" Nov 23 04:08:51 crc kubenswrapper[4751]: E1123 04:08:51.592406 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" podUID="fd3d207d-1aae-49de-984e-ca3ebf42f864" Nov 23 04:08:51 crc kubenswrapper[4751]: E1123 04:08:51.592502 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" podUID="09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb" Nov 23 04:08:51 crc kubenswrapper[4751]: E1123 04:08:51.592737 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" podUID="7f713385-4fd0-462a-8812-ae2cc7ad910b" Nov 23 04:08:51 crc kubenswrapper[4751]: E1123 04:08:51.596093 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" podUID="4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d" Nov 23 04:08:52 crc kubenswrapper[4751]: I1123 04:08:52.260458 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whsps"] Nov 23 04:08:52 crc kubenswrapper[4751]: I1123 04:08:52.260851 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-whsps" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="registry-server" containerID="cri-o://e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6" gracePeriod=2 Nov 23 04:08:52 crc kubenswrapper[4751]: I1123 04:08:52.598117 4751 generic.go:334] "Generic (PLEG): container finished" podID="db35e685-b57a-44ab-af2a-03d56b81895b" containerID="e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6" exitCode=0 Nov 23 04:08:52 crc kubenswrapper[4751]: I1123 04:08:52.598504 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whsps" event={"ID":"db35e685-b57a-44ab-af2a-03d56b81895b","Type":"ContainerDied","Data":"e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6"} Nov 23 04:08:53 crc kubenswrapper[4751]: I1123 04:08:53.600797 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:53 crc kubenswrapper[4751]: I1123 04:08:53.600930 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:53 crc kubenswrapper[4751]: I1123 04:08:53.669295 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:54 crc kubenswrapper[4751]: I1123 04:08:54.699076 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:08:54 crc kubenswrapper[4751]: I1123 04:08:54.872716 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g59bn"] Nov 23 04:08:54 crc kubenswrapper[4751]: I1123 04:08:54.876212 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:54 crc kubenswrapper[4751]: I1123 04:08:54.887189 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g59bn"] Nov 23 04:08:54 crc kubenswrapper[4751]: I1123 04:08:54.930529 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5t4v\" (UniqueName: \"kubernetes.io/projected/93e5a534-969f-42fe-b122-85db8bb43715-kube-api-access-p5t4v\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:54 crc kubenswrapper[4751]: I1123 04:08:54.930616 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-utilities\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:54 crc kubenswrapper[4751]: I1123 04:08:54.930734 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-catalog-content\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:55 crc kubenswrapper[4751]: I1123 04:08:55.031756 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-catalog-content\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:55 crc kubenswrapper[4751]: I1123 04:08:55.031837 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5t4v\" (UniqueName: \"kubernetes.io/projected/93e5a534-969f-42fe-b122-85db8bb43715-kube-api-access-p5t4v\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:55 crc kubenswrapper[4751]: I1123 04:08:55.031895 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-utilities\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:55 crc kubenswrapper[4751]: I1123 04:08:55.032405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-catalog-content\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:55 crc kubenswrapper[4751]: I1123 04:08:55.032505 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-utilities\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:55 crc kubenswrapper[4751]: I1123 04:08:55.057206 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5t4v\" (UniqueName: \"kubernetes.io/projected/93e5a534-969f-42fe-b122-85db8bb43715-kube-api-access-p5t4v\") pod \"redhat-operators-g59bn\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:55 crc kubenswrapper[4751]: I1123 04:08:55.198931 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:08:57 crc kubenswrapper[4751]: I1123 04:08:57.259680 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whr9b"] Nov 23 04:08:57 crc kubenswrapper[4751]: I1123 04:08:57.260267 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-whr9b" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerName="registry-server" containerID="cri-o://cefbab68c34ad54b4f025657ee4215517497c5982ae41a6e7f34d145c55fdade" gracePeriod=2 Nov 23 04:08:57 crc kubenswrapper[4751]: I1123 04:08:57.647670 4751 generic.go:334] "Generic (PLEG): container finished" podID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerID="cefbab68c34ad54b4f025657ee4215517497c5982ae41a6e7f34d145c55fdade" exitCode=0 Nov 23 04:08:57 crc kubenswrapper[4751]: I1123 04:08:57.647754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whr9b" event={"ID":"efa09fe3-1c6b-439b-b717-e76800f469f7","Type":"ContainerDied","Data":"cefbab68c34ad54b4f025657ee4215517497c5982ae41a6e7f34d145c55fdade"} Nov 23 04:08:58 crc kubenswrapper[4751]: E1123 04:08:58.586942 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6 is running failed: container process not found" containerID="e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 04:08:58 crc kubenswrapper[4751]: E1123 04:08:58.587501 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6 is running failed: container process not found" containerID="e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 04:08:58 crc kubenswrapper[4751]: E1123 04:08:58.587898 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6 is running failed: container process not found" containerID="e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 04:08:58 crc kubenswrapper[4751]: E1123 04:08:58.588014 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6 is running failed: container 
process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-whsps" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="registry-server" Nov 23 04:08:59 crc kubenswrapper[4751]: I1123 04:08:59.385687 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5849b9999b-qsxxk" Nov 23 04:09:01 crc kubenswrapper[4751]: E1123 04:09:01.067752 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 23 04:09:01 crc kubenswrapper[4751]: E1123 04:09:01.068251 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fmnbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-54fc5f65b7-scc77_openstack-operators(d96aa695-fa54-4828-b37d-9c4e5121344a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:09:01 crc kubenswrapper[4751]: E1123 04:09:01.429252 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 23 04:09:01 crc kubenswrapper[4751]: E1123 04:09:01.429447 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldfsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78bd47f458-6dwlq_openstack-operators(c1d9b3d4-a044-46e5-be2c-463d728a4c5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.523645 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whsps" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.628707 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-catalog-content\") pod \"db35e685-b57a-44ab-af2a-03d56b81895b\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.629012 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-utilities\") pod \"db35e685-b57a-44ab-af2a-03d56b81895b\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.629064 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shdq6\" (UniqueName: \"kubernetes.io/projected/db35e685-b57a-44ab-af2a-03d56b81895b-kube-api-access-shdq6\") pod \"db35e685-b57a-44ab-af2a-03d56b81895b\" (UID: \"db35e685-b57a-44ab-af2a-03d56b81895b\") " Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.632039 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-utilities" (OuterVolumeSpecName: "utilities") pod "db35e685-b57a-44ab-af2a-03d56b81895b" (UID: "db35e685-b57a-44ab-af2a-03d56b81895b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.635749 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db35e685-b57a-44ab-af2a-03d56b81895b-kube-api-access-shdq6" (OuterVolumeSpecName: "kube-api-access-shdq6") pod "db35e685-b57a-44ab-af2a-03d56b81895b" (UID: "db35e685-b57a-44ab-af2a-03d56b81895b"). InnerVolumeSpecName "kube-api-access-shdq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.681994 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whsps" event={"ID":"db35e685-b57a-44ab-af2a-03d56b81895b","Type":"ContainerDied","Data":"6eb82cedbff23d028696ee3bccef90613b579207304ec375806b255c6989d0e6"} Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.682035 4751 scope.go:117] "RemoveContainer" containerID="e346323b35193d3c3d3a39c127ce7d03d97f06bbf1ee6427b096b1bb13bf7ae6" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.682143 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whsps" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.693469 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db35e685-b57a-44ab-af2a-03d56b81895b" (UID: "db35e685-b57a-44ab-af2a-03d56b81895b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.717110 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:09:01 crc kubenswrapper[4751]: E1123 04:09:01.717219 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" podUID="c1d9b3d4-a044-46e5-be2c-463d728a4c5d" Nov 23 04:09:01 crc kubenswrapper[4751]: E1123 04:09:01.732395 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" podUID="d96aa695-fa54-4828-b37d-9c4e5121344a" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.732965 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shdq6\" (UniqueName: \"kubernetes.io/projected/db35e685-b57a-44ab-af2a-03d56b81895b-kube-api-access-shdq6\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.732984 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.732995 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db35e685-b57a-44ab-af2a-03d56b81895b-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.783814 4751 scope.go:117] "RemoveContainer" containerID="171c6702b95e9de0fae80e4cdbe6c3d6d365135fd6ba08e6150845cbe8648218" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.833915 4751 scope.go:117] "RemoveContainer" containerID="ded27334350433c097c96b7bdd0373759721519f104b84d74c2bed50ecb18eb0" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.834276 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-catalog-content\") pod \"efa09fe3-1c6b-439b-b717-e76800f469f7\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.834309 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-utilities\") pod \"efa09fe3-1c6b-439b-b717-e76800f469f7\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.834424 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv9m2\" (UniqueName: \"kubernetes.io/projected/efa09fe3-1c6b-439b-b717-e76800f469f7-kube-api-access-pv9m2\") pod \"efa09fe3-1c6b-439b-b717-e76800f469f7\" (UID: \"efa09fe3-1c6b-439b-b717-e76800f469f7\") " Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.836130 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-utilities" (OuterVolumeSpecName: "utilities") pod "efa09fe3-1c6b-439b-b717-e76800f469f7" (UID: "efa09fe3-1c6b-439b-b717-e76800f469f7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.850837 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa09fe3-1c6b-439b-b717-e76800f469f7-kube-api-access-pv9m2" (OuterVolumeSpecName: "kube-api-access-pv9m2") pod "efa09fe3-1c6b-439b-b717-e76800f469f7" (UID: "efa09fe3-1c6b-439b-b717-e76800f469f7"). InnerVolumeSpecName "kube-api-access-pv9m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.859422 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efa09fe3-1c6b-439b-b717-e76800f469f7" (UID: "efa09fe3-1c6b-439b-b717-e76800f469f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.935387 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.935417 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa09fe3-1c6b-439b-b717-e76800f469f7-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.935429 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv9m2\" (UniqueName: \"kubernetes.io/projected/efa09fe3-1c6b-439b-b717-e76800f469f7-kube-api-access-pv9m2\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:01 crc kubenswrapper[4751]: I1123 04:09:01.955871 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g59bn"] Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.039491 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whsps"] Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.046312 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-whsps"] Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.653929 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" path="/var/lib/kubelet/pods/db35e685-b57a-44ab-af2a-03d56b81895b/volumes" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.711400 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" event={"ID":"b49833ae-797b-42aa-aa69-4ddc939dcad6","Type":"ContainerStarted","Data":"40416f7d336a6b52f33388630547fd8d23e299d9ded65182bf5e9cb3cb60518f"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.711441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" event={"ID":"b49833ae-797b-42aa-aa69-4ddc939dcad6","Type":"ContainerStarted","Data":"ffcb8042ae931f0e6f77ebcca001a61d77f3cc5b09c473809f82c451808bf959"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.711477 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.718684 4751 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" event={"ID":"18a8d35c-05ad-4057-b88e-e5f0d417678f","Type":"ContainerStarted","Data":"f6c43cbd548ddbaa4ab73c13f85f8fc6d875eac5bfd1bb0fa770e813a5f3917e"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.726753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" event={"ID":"abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa","Type":"ContainerStarted","Data":"a94d98aa5c010f3d85e507bcbabca37fdfcf0da878c6cc1aeab7ff6ca9f01af2"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.726787 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" event={"ID":"abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa","Type":"ContainerStarted","Data":"8e33502073370d0751a489d6e70e435d04e75dce159cc04c016d8cea60430ba7"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.727607 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.727656 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" podStartSLOduration=3.13436187 podStartE2EDuration="15.727640661s" podCreationTimestamp="2025-11-23 04:08:47 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.887153778 +0000 UTC m=+825.080825137" lastFinishedPulling="2025-11-23 04:09:01.480432549 +0000 UTC m=+837.674103928" observedRunningTime="2025-11-23 04:09:02.725927176 +0000 UTC m=+838.919598535" watchObservedRunningTime="2025-11-23 04:09:02.727640661 +0000 UTC m=+838.921312020" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.729881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" event={"ID":"3ce24a56-0cc8-4f63-91da-dde87342529b","Type":"ContainerStarted","Data":"e820272db7a60a69de3d8f6fe1ab2c8ea56510bd3f9de5824d1dfc1798ef7bb0"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.736009 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" event={"ID":"3fe1e718-5530-4899-9a28-fbaa27ed08f4","Type":"ContainerStarted","Data":"1f5bba4da0323b0c78c98988a6563b6df3e8f2d68d1321c4070ab3e9d3b1e8d8"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.736043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" event={"ID":"3fe1e718-5530-4899-9a28-fbaa27ed08f4","Type":"ContainerStarted","Data":"5deeaf5360eb97dde45613024a675b4771f241f9f9a93856e505e76fb6cdcf88"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.736687 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.745852 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" podStartSLOduration=4.065924021 podStartE2EDuration="16.745837216s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.871673793 +0000 UTC m=+825.065345152" lastFinishedPulling="2025-11-23 04:09:01.551586978 +0000 UTC m=+837.745258347" 
observedRunningTime="2025-11-23 04:09:02.741315608 +0000 UTC m=+838.934986967" watchObservedRunningTime="2025-11-23 04:09:02.745837216 +0000 UTC m=+838.939508565" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.751074 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-whr9b" event={"ID":"efa09fe3-1c6b-439b-b717-e76800f469f7","Type":"ContainerDied","Data":"4bd8bbe5a1a1f4b802e2285a21d397d1d9c30bd5ba8b38c0341ee01c1a60fec9"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.751120 4751 scope.go:117] "RemoveContainer" containerID="cefbab68c34ad54b4f025657ee4215517497c5982ae41a6e7f34d145c55fdade" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.751257 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-whr9b" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.753934 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" event={"ID":"c1d9b3d4-a044-46e5-be2c-463d728a4c5d","Type":"ContainerStarted","Data":"81b05ae2a63b8f414946b8d86a9b403f6114b43b2314ab0d067c2ac2548d995b"} Nov 23 04:09:02 crc kubenswrapper[4751]: E1123 04:09:02.769077 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" podUID="c1d9b3d4-a044-46e5-be2c-463d728a4c5d" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.770092 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" event={"ID":"d96aa695-fa54-4828-b37d-9c4e5121344a","Type":"ContainerStarted","Data":"31122fac381741c22a777fafae153a686971e1cca0ca43a58c28ff92f18a7380"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.770815 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" podStartSLOduration=4.178355599 podStartE2EDuration="16.770798838s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.875712419 +0000 UTC m=+825.069383778" lastFinishedPulling="2025-11-23 04:09:01.468155658 +0000 UTC m=+837.661827017" observedRunningTime="2025-11-23 04:09:02.769382871 +0000 UTC m=+838.963054230" watchObservedRunningTime="2025-11-23 04:09:02.770798838 +0000 UTC m=+838.964470197" Nov 23 04:09:02 crc kubenswrapper[4751]: E1123 04:09:02.819539 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" podUID="d96aa695-fa54-4828-b37d-9c4e5121344a" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.819688 4751 scope.go:117] "RemoveContainer" containerID="08134b0bdb68728b75bb6f94381831415870bed25e17b428891001c73356ffdd" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.820988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" 
event={"ID":"763c6ab4-8128-46d4-9c53-45c1b2cd7ecc","Type":"ContainerStarted","Data":"438bd741f2a361f24f6b3084dba80c050ee81173d9a10659aadd16ad6f19ea2e"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.821020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" event={"ID":"763c6ab4-8128-46d4-9c53-45c1b2cd7ecc","Type":"ContainerStarted","Data":"5fa28a2d22d774590872111ab6be65d3634698d28198524f4e18873d94cbdbaa"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.821649 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.833166 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" event={"ID":"f65918da-cd67-4138-bcb4-1316d398b30e","Type":"ContainerStarted","Data":"acd7e94a0143c92b0b7ef81fc6830b35dfa81412ab3d743cfb521fb74aaf651b"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.834098 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.837119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" event={"ID":"5323f8b0-18ba-42eb-9a73-ee25c2592aea","Type":"ContainerStarted","Data":"4a604bb7f6aa794462686d5a71b5cdf00f82ac45e5d3fd42dacf07c7a62a28bf"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.837156 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" event={"ID":"5323f8b0-18ba-42eb-9a73-ee25c2592aea","Type":"ContainerStarted","Data":"55b3e8877d1ec5edcff970cbad5b961eab75d38cd89e81d48c7be579a86d8f3c"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.837764 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.847300 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-whr9b"] Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.854562 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g59bn" event={"ID":"93e5a534-969f-42fe-b122-85db8bb43715","Type":"ContainerStarted","Data":"9331f1c325989beab48f810bb63d8db2a000abb5036380daa502f3d1b75e29d1"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.854619 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g59bn" event={"ID":"93e5a534-969f-42fe-b122-85db8bb43715","Type":"ContainerStarted","Data":"9d5f1f8252b04f5e5a9ebff0aa9bfb8bf91bbd1d68fe48f97b456620673d603c"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.858013 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-whr9b"] Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.873559 4751 scope.go:117] "RemoveContainer" containerID="73dc81ae0eb4487583c1e69dcf6ece3d7a380df3601947fc47be432e8cf845af" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.882724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" 
event={"ID":"934b92b0-4c8a-48d8-8514-ffe3d566b58d","Type":"ContainerStarted","Data":"8e1838069191909491d2f2ae810dfe4f52f55864203bcadfa7a411b4573301f9"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.882885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" event={"ID":"934b92b0-4c8a-48d8-8514-ffe3d566b58d","Type":"ContainerStarted","Data":"3566d991e1a5604f4191d61966cb75d53e73dd7ccd95cb56b8e4f8ac4869a272"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.883604 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.899433 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" podStartSLOduration=3.226187428 podStartE2EDuration="15.899418409s" podCreationTimestamp="2025-11-23 04:08:47 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.898511624 +0000 UTC m=+825.092182983" lastFinishedPulling="2025-11-23 04:09:01.571742605 +0000 UTC m=+837.765413964" observedRunningTime="2025-11-23 04:09:02.873853901 +0000 UTC m=+839.067525260" watchObservedRunningTime="2025-11-23 04:09:02.899418409 +0000 UTC m=+839.093089768" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.918009 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" podStartSLOduration=3.933178442 podStartE2EDuration="16.917993895s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.496500849 +0000 UTC m=+824.690172208" lastFinishedPulling="2025-11-23 04:09:01.481316282 +0000 UTC m=+837.674987661" observedRunningTime="2025-11-23 04:09:02.899387329 +0000 UTC m=+839.093058688" watchObservedRunningTime="2025-11-23 04:09:02.917993895 +0000 UTC m=+839.111665254" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.920184 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" podStartSLOduration=4.547167327 podStartE2EDuration="16.920175382s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.146237658 +0000 UTC m=+825.339909017" lastFinishedPulling="2025-11-23 04:09:01.519245693 +0000 UTC m=+837.712917072" observedRunningTime="2025-11-23 04:09:02.917495922 +0000 UTC m=+839.111167281" watchObservedRunningTime="2025-11-23 04:09:02.920175382 +0000 UTC m=+839.113846741" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.920663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" event={"ID":"9762a11c-fd59-489a-9a95-7725f4d1c9e4","Type":"ContainerStarted","Data":"54dd329179753574a2f22ec93889fe9c6eac411ae2ba39f9844c19415add08f5"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.920707 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" event={"ID":"9762a11c-fd59-489a-9a95-7725f4d1c9e4","Type":"ContainerStarted","Data":"526d948f830e6702189dda3ee9ee336c8448a3e8a2f2a69622befe0320cbffa3"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.920835 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.924731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" event={"ID":"d877cc36-10d3-4ba0-8140-ad4f89a2b855","Type":"ContainerStarted","Data":"cd5c4972156521d1bd2280d4143d85649046ba3cad77af7c14cf3c0f51b4712b"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.924769 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" event={"ID":"d877cc36-10d3-4ba0-8140-ad4f89a2b855","Type":"ContainerStarted","Data":"b372ae6d8d49fbb69f4ae6c2aefe6d575404cd0f4512c54133683e1d7bda1e10"} Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.924874 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.945209 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" podStartSLOduration=4.005255215 podStartE2EDuration="16.945189925s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.528177327 +0000 UTC m=+824.721848686" lastFinishedPulling="2025-11-23 04:09:01.468112027 +0000 UTC m=+837.661783396" observedRunningTime="2025-11-23 04:09:02.944820026 +0000 UTC m=+839.138491385" watchObservedRunningTime="2025-11-23 04:09:02.945189925 +0000 UTC m=+839.138861284" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.977173 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" podStartSLOduration=4.378338175 podStartE2EDuration="16.977159171s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.871507059 +0000 UTC m=+825.065178418" lastFinishedPulling="2025-11-23 04:09:01.470328055 +0000 UTC m=+837.663999414" observedRunningTime="2025-11-23 04:09:02.974540613 +0000 UTC m=+839.168211972" watchObservedRunningTime="2025-11-23 04:09:02.977159171 +0000 UTC m=+839.170830530" Nov 23 04:09:02 crc kubenswrapper[4751]: I1123 04:09:02.995045 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" podStartSLOduration=4.0319383030000004 podStartE2EDuration="16.995029938s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.506457469 +0000 UTC m=+824.700128828" lastFinishedPulling="2025-11-23 04:09:01.469549094 +0000 UTC m=+837.663220463" observedRunningTime="2025-11-23 04:09:02.994703779 +0000 UTC m=+839.188375138" watchObservedRunningTime="2025-11-23 04:09:02.995029938 +0000 UTC m=+839.188701297" Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.937571 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" event={"ID":"18a8d35c-05ad-4057-b88e-e5f0d417678f","Type":"ContainerStarted","Data":"be683d89202ebc6fa597a9fedfa676f43075cb2c3ca319608b672ccbf33c4d06"} Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.938184 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" Nov 23 04:09:03 crc 
kubenswrapper[4751]: I1123 04:09:03.941255 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" event={"ID":"3ce24a56-0cc8-4f63-91da-dde87342529b","Type":"ContainerStarted","Data":"72517670d660a54cb59bd3ff07aa362f093fcc86aa3572b4dda0b31df76b95b2"} Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.941381 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.950506 4751 generic.go:334] "Generic (PLEG): container finished" podID="93e5a534-969f-42fe-b122-85db8bb43715" containerID="9331f1c325989beab48f810bb63d8db2a000abb5036380daa502f3d1b75e29d1" exitCode=0 Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.950582 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g59bn" event={"ID":"93e5a534-969f-42fe-b122-85db8bb43715","Type":"ContainerDied","Data":"9331f1c325989beab48f810bb63d8db2a000abb5036380daa502f3d1b75e29d1"} Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.950615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g59bn" event={"ID":"93e5a534-969f-42fe-b122-85db8bb43715","Type":"ContainerStarted","Data":"2191b7a7258b3baa7e4ab1649903088262edd374e5bad0c1e429e8ddaf569a0c"} Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.956139 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" podStartSLOduration=4.926656312 podStartE2EDuration="17.956120592s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.504628161 +0000 UTC m=+824.698299520" lastFinishedPulling="2025-11-23 04:09:01.534092431 +0000 UTC m=+837.727763800" observedRunningTime="2025-11-23 04:09:03.954695415 +0000 UTC m=+840.148366784" watchObservedRunningTime="2025-11-23 04:09:03.956120592 +0000 UTC m=+840.149791951" Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.960292 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" event={"ID":"f65918da-cd67-4138-bcb4-1316d398b30e","Type":"ContainerStarted","Data":"492294296b52c73941e3448908331024551b21563eafdc459f97834474bf86cd"} Nov 23 04:09:03 crc kubenswrapper[4751]: E1123 04:09:03.964924 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" podUID="c1d9b3d4-a044-46e5-be2c-463d728a4c5d" Nov 23 04:09:03 crc kubenswrapper[4751]: E1123 04:09:03.980750 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" podUID="d96aa695-fa54-4828-b37d-9c4e5121344a" Nov 23 04:09:03 crc kubenswrapper[4751]: I1123 04:09:03.985059 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" podStartSLOduration=5.026142463 podStartE2EDuration="17.985009207s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.52177692 +0000 UTC m=+824.715448269" lastFinishedPulling="2025-11-23 04:09:01.480643654 +0000 UTC m=+837.674315013" observedRunningTime="2025-11-23 04:09:03.976388942 +0000 UTC m=+840.170060301" watchObservedRunningTime="2025-11-23 04:09:03.985009207 +0000 UTC m=+840.178680566" Nov 23 04:09:04 crc kubenswrapper[4751]: I1123 04:09:04.660294 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" path="/var/lib/kubelet/pods/efa09fe3-1c6b-439b-b717-e76800f469f7/volumes" Nov 23 04:09:04 crc kubenswrapper[4751]: I1123 04:09:04.970828 4751 generic.go:334] "Generic (PLEG): container finished" podID="93e5a534-969f-42fe-b122-85db8bb43715" containerID="2191b7a7258b3baa7e4ab1649903088262edd374e5bad0c1e429e8ddaf569a0c" exitCode=0 Nov 23 04:09:04 crc kubenswrapper[4751]: I1123 04:09:04.970969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g59bn" event={"ID":"93e5a534-969f-42fe-b122-85db8bb43715","Type":"ContainerDied","Data":"2191b7a7258b3baa7e4ab1649903088262edd374e5bad0c1e429e8ddaf569a0c"} Nov 23 04:09:06 crc kubenswrapper[4751]: I1123 04:09:06.989902 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" event={"ID":"0d5ce886-fa22-4fc1-a369-0311b8a22353","Type":"ContainerStarted","Data":"472921e0d61e87098645c5ce950c0ebb2911cb1d8cec00d4f3912345b5c14994"} Nov 23 04:09:06 crc kubenswrapper[4751]: I1123 04:09:06.994671 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g59bn" event={"ID":"93e5a534-969f-42fe-b122-85db8bb43715","Type":"ContainerStarted","Data":"4fb0a1f8f02518f2287d5679ce5a2acb2831f218e119e9756e555cc2cd52f943"} Nov 23 04:09:06 crc kubenswrapper[4751]: I1123 04:09:06.998643 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" event={"ID":"7f713385-4fd0-462a-8812-ae2cc7ad910b","Type":"ContainerStarted","Data":"cb6ad9cff61c42cebe72a00136e966ba5f38228f7915422d1b8b45a1d8d51107"} Nov 23 04:09:06 crc kubenswrapper[4751]: I1123 04:09:06.999266 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.015035 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd" podStartSLOduration=3.418208856 podStartE2EDuration="20.015019337s" podCreationTimestamp="2025-11-23 04:08:47 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.226458203 +0000 UTC m=+825.420129562" lastFinishedPulling="2025-11-23 04:09:05.823268684 +0000 UTC m=+842.016940043" observedRunningTime="2025-11-23 04:09:07.012194463 +0000 UTC m=+843.205865822" watchObservedRunningTime="2025-11-23 04:09:07.015019337 +0000 UTC m=+843.208690696" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.032997 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g59bn" podStartSLOduration=10.095525846 podStartE2EDuration="13.032981936s" podCreationTimestamp="2025-11-23 04:08:54 +0000 UTC" 
firstStartedPulling="2025-11-23 04:09:02.885117176 +0000 UTC m=+839.078788535" lastFinishedPulling="2025-11-23 04:09:05.822573266 +0000 UTC m=+842.016244625" observedRunningTime="2025-11-23 04:09:07.030598764 +0000 UTC m=+843.224270133" watchObservedRunningTime="2025-11-23 04:09:07.032981936 +0000 UTC m=+843.226653295" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.046665 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" podStartSLOduration=4.412279241 podStartE2EDuration="21.046643413s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.18880973 +0000 UTC m=+825.382481089" lastFinishedPulling="2025-11-23 04:09:05.823173902 +0000 UTC m=+842.016845261" observedRunningTime="2025-11-23 04:09:07.045800251 +0000 UTC m=+843.239471610" watchObservedRunningTime="2025-11-23 04:09:07.046643413 +0000 UTC m=+843.240314772" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.052391 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-5jtfh" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.093164 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-5hn9n" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.119217 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-vrm2h" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.135113 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7969689c84-hlrq4" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.203227 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-wh2rd" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.205903 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-js2kh" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.365559 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58f887965d-5755c" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.413868 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-rr4vk" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.452423 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-nw29b" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.598010 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d656998f4-q87hr" Nov 23 04:09:07 crc kubenswrapper[4751]: I1123 04:09:07.624462 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-8ldr4" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.087926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" 
event={"ID":"09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb","Type":"ContainerStarted","Data":"fdeff404cbdd94444664762ce43e44d0aab14570d99518930b64add7fede6758"} Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.088858 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.090195 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" event={"ID":"59fa9d8a-cc64-478a-be71-fda41132aca9","Type":"ContainerStarted","Data":"2d6ba925514c986a80e4d0fc8a31a1f93a1668ffc7279439f6fab6d881879fb7"} Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.090491 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.092391 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" event={"ID":"fd3d207d-1aae-49de-984e-ca3ebf42f864","Type":"ContainerStarted","Data":"4d11d751533e84b749dc2f23d558470848c4bc1b4182f38669aa3ea1cea78a70"} Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.092555 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.094734 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" event={"ID":"4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d","Type":"ContainerStarted","Data":"0b87ed20c7e873d5539f550bc2448e5427cb994b55ee4490b162c47629fc9e02"} Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.094976 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.104893 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" event={"ID":"79f85f0e-fcac-4778-8dac-0a2953ba9c8d","Type":"ContainerStarted","Data":"2da2f1f23b9e8b3f04492952f8a18c5850335c742a370131ff8579d47d95c080"} Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.105206 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.108891 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" event={"ID":"5176792c-6b3a-46cc-9ddb-5416391ce264","Type":"ContainerStarted","Data":"8ad646d70376bd0d52e63995ee625d31ecb4e4a502acdc52224a57b2f556f69c"} Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.109146 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.113647 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" event={"ID":"8e942272-3be5-4bec-b764-ab18709fbb4d","Type":"ContainerStarted","Data":"b77c42fa1b0a41c19427f6a3a2aa213a1acb173fc7f0e43e1ad896dba8d72bed"} Nov 23 04:09:14 crc 
kubenswrapper[4751]: I1123 04:09:14.113966 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.137271 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" podStartSLOduration=3.422062759 podStartE2EDuration="27.137251597s" podCreationTimestamp="2025-11-23 04:08:47 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.164515046 +0000 UTC m=+825.358186405" lastFinishedPulling="2025-11-23 04:09:12.879703844 +0000 UTC m=+849.073375243" observedRunningTime="2025-11-23 04:09:14.133261692 +0000 UTC m=+850.326933051" watchObservedRunningTime="2025-11-23 04:09:14.137251597 +0000 UTC m=+850.330922956" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.137820 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" podStartSLOduration=4.43249469 podStartE2EDuration="28.137813992s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.188991625 +0000 UTC m=+825.382662974" lastFinishedPulling="2025-11-23 04:09:12.894310887 +0000 UTC m=+849.087982276" observedRunningTime="2025-11-23 04:09:14.111916403 +0000 UTC m=+850.305587792" watchObservedRunningTime="2025-11-23 04:09:14.137813992 +0000 UTC m=+850.331485351" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.153039 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" podStartSLOduration=3.433726905 podStartE2EDuration="27.15301463s" podCreationTimestamp="2025-11-23 04:08:47 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.160316486 +0000 UTC m=+825.353987855" lastFinishedPulling="2025-11-23 04:09:12.879604201 +0000 UTC m=+849.073275580" observedRunningTime="2025-11-23 04:09:14.151494501 +0000 UTC m=+850.345165890" watchObservedRunningTime="2025-11-23 04:09:14.15301463 +0000 UTC m=+850.346686029" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.197565 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" podStartSLOduration=4.517320287 podStartE2EDuration="28.197544738s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.160321246 +0000 UTC m=+825.353992605" lastFinishedPulling="2025-11-23 04:09:12.840545687 +0000 UTC m=+849.034217056" observedRunningTime="2025-11-23 04:09:14.193737898 +0000 UTC m=+850.387409277" watchObservedRunningTime="2025-11-23 04:09:14.197544738 +0000 UTC m=+850.391216097" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.210909 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" podStartSLOduration=3.479335269 podStartE2EDuration="27.210888898s" podCreationTimestamp="2025-11-23 04:08:47 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.178546992 +0000 UTC m=+825.372218351" lastFinishedPulling="2025-11-23 04:09:12.910100601 +0000 UTC m=+849.103771980" observedRunningTime="2025-11-23 04:09:14.210765415 +0000 UTC m=+850.404436774" watchObservedRunningTime="2025-11-23 04:09:14.210888898 +0000 UTC m=+850.404560257" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.230445 4751 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" podStartSLOduration=4.497779616 podStartE2EDuration="28.23043s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.146952117 +0000 UTC m=+825.340623496" lastFinishedPulling="2025-11-23 04:09:12.879602521 +0000 UTC m=+849.073273880" observedRunningTime="2025-11-23 04:09:14.227930125 +0000 UTC m=+850.421601514" watchObservedRunningTime="2025-11-23 04:09:14.23043 +0000 UTC m=+850.424101359" Nov 23 04:09:14 crc kubenswrapper[4751]: I1123 04:09:14.246472 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" podStartSLOduration=5.628521464 podStartE2EDuration="28.24645183s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:49.146741801 +0000 UTC m=+825.340413160" lastFinishedPulling="2025-11-23 04:09:11.764672137 +0000 UTC m=+847.958343526" observedRunningTime="2025-11-23 04:09:14.245532696 +0000 UTC m=+850.439204045" watchObservedRunningTime="2025-11-23 04:09:14.24645183 +0000 UTC m=+850.440123189" Nov 23 04:09:15 crc kubenswrapper[4751]: I1123 04:09:15.199192 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:09:15 crc kubenswrapper[4751]: I1123 04:09:15.199237 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:09:15 crc kubenswrapper[4751]: I1123 04:09:15.275300 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:09:16 crc kubenswrapper[4751]: I1123 04:09:16.208523 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:09:16 crc kubenswrapper[4751]: I1123 04:09:16.270628 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g59bn"] Nov 23 04:09:17 crc kubenswrapper[4751]: I1123 04:09:17.496722 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-zstqk" Nov 23 04:09:18 crc kubenswrapper[4751]: I1123 04:09:18.118640 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq" Nov 23 04:09:18 crc kubenswrapper[4751]: I1123 04:09:18.148024 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g59bn" podUID="93e5a534-969f-42fe-b122-85db8bb43715" containerName="registry-server" containerID="cri-o://4fb0a1f8f02518f2287d5679ce5a2acb2831f218e119e9756e555cc2cd52f943" gracePeriod=2 Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.161649 4751 generic.go:334] "Generic (PLEG): container finished" podID="93e5a534-969f-42fe-b122-85db8bb43715" containerID="4fb0a1f8f02518f2287d5679ce5a2acb2831f218e119e9756e555cc2cd52f943" exitCode=0 Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.162081 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g59bn" event={"ID":"93e5a534-969f-42fe-b122-85db8bb43715","Type":"ContainerDied","Data":"4fb0a1f8f02518f2287d5679ce5a2acb2831f218e119e9756e555cc2cd52f943"} Nov 23 04:09:19 crc 
kubenswrapper[4751]: I1123 04:09:19.585825 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.739132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5t4v\" (UniqueName: \"kubernetes.io/projected/93e5a534-969f-42fe-b122-85db8bb43715-kube-api-access-p5t4v\") pod \"93e5a534-969f-42fe-b122-85db8bb43715\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.741153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-catalog-content\") pod \"93e5a534-969f-42fe-b122-85db8bb43715\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.741732 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-utilities\") pod \"93e5a534-969f-42fe-b122-85db8bb43715\" (UID: \"93e5a534-969f-42fe-b122-85db8bb43715\") " Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.743188 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-utilities" (OuterVolumeSpecName: "utilities") pod "93e5a534-969f-42fe-b122-85db8bb43715" (UID: "93e5a534-969f-42fe-b122-85db8bb43715"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.745003 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e5a534-969f-42fe-b122-85db8bb43715-kube-api-access-p5t4v" (OuterVolumeSpecName: "kube-api-access-p5t4v") pod "93e5a534-969f-42fe-b122-85db8bb43715" (UID: "93e5a534-969f-42fe-b122-85db8bb43715"). InnerVolumeSpecName "kube-api-access-p5t4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.828972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93e5a534-969f-42fe-b122-85db8bb43715" (UID: "93e5a534-969f-42fe-b122-85db8bb43715"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.843738 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.843776 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5t4v\" (UniqueName: \"kubernetes.io/projected/93e5a534-969f-42fe-b122-85db8bb43715-kube-api-access-p5t4v\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:19 crc kubenswrapper[4751]: I1123 04:09:19.843788 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93e5a534-969f-42fe-b122-85db8bb43715-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:09:20 crc kubenswrapper[4751]: I1123 04:09:20.186406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g59bn" event={"ID":"93e5a534-969f-42fe-b122-85db8bb43715","Type":"ContainerDied","Data":"9d5f1f8252b04f5e5a9ebff0aa9bfb8bf91bbd1d68fe48f97b456620673d603c"} Nov 23 04:09:20 crc kubenswrapper[4751]: I1123 04:09:20.186504 4751 scope.go:117] "RemoveContainer" containerID="4fb0a1f8f02518f2287d5679ce5a2acb2831f218e119e9756e555cc2cd52f943" Nov 23 04:09:20 crc kubenswrapper[4751]: I1123 04:09:20.186556 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g59bn" Nov 23 04:09:20 crc kubenswrapper[4751]: I1123 04:09:20.214789 4751 scope.go:117] "RemoveContainer" containerID="2191b7a7258b3baa7e4ab1649903088262edd374e5bad0c1e429e8ddaf569a0c" Nov 23 04:09:20 crc kubenswrapper[4751]: I1123 04:09:20.241575 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g59bn"] Nov 23 04:09:20 crc kubenswrapper[4751]: I1123 04:09:20.250751 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g59bn"] Nov 23 04:09:20 crc kubenswrapper[4751]: I1123 04:09:20.263218 4751 scope.go:117] "RemoveContainer" containerID="9331f1c325989beab48f810bb63d8db2a000abb5036380daa502f3d1b75e29d1" Nov 23 04:09:20 crc kubenswrapper[4751]: I1123 04:09:20.659072 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e5a534-969f-42fe-b122-85db8bb43715" path="/var/lib/kubelet/pods/93e5a534-969f-42fe-b122-85db8bb43715/volumes" Nov 23 04:09:22 crc kubenswrapper[4751]: I1123 04:09:22.207643 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" event={"ID":"c1d9b3d4-a044-46e5-be2c-463d728a4c5d","Type":"ContainerStarted","Data":"3016576248f262c4912f3676a5b6c748f3be760f26907fa911bce5f00e378bd2"} Nov 23 04:09:22 crc kubenswrapper[4751]: I1123 04:09:22.208619 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" Nov 23 04:09:22 crc kubenswrapper[4751]: I1123 04:09:22.211762 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" event={"ID":"d96aa695-fa54-4828-b37d-9c4e5121344a","Type":"ContainerStarted","Data":"538bd7c5da91d3e5ee238541fe84e162017fc5ebacc8d8da83b30a7b4110a1e0"} Nov 23 04:09:22 crc kubenswrapper[4751]: I1123 04:09:22.212039 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" Nov 23 04:09:22 crc kubenswrapper[4751]: I1123 04:09:22.239509 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" podStartSLOduration=5.69655183 podStartE2EDuration="36.239488647s" podCreationTimestamp="2025-11-23 04:08:46 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.887763014 +0000 UTC m=+825.081434373" lastFinishedPulling="2025-11-23 04:09:19.430699811 +0000 UTC m=+855.624371190" observedRunningTime="2025-11-23 04:09:22.232139684 +0000 UTC m=+858.425811063" watchObservedRunningTime="2025-11-23 04:09:22.239488647 +0000 UTC m=+858.433160016" Nov 23 04:09:22 crc kubenswrapper[4751]: I1123 04:09:22.256364 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" podStartSLOduration=4.829626188 podStartE2EDuration="35.256329599s" podCreationTimestamp="2025-11-23 04:08:47 +0000 UTC" firstStartedPulling="2025-11-23 04:08:48.903118795 +0000 UTC m=+825.096790154" lastFinishedPulling="2025-11-23 04:09:19.329822146 +0000 UTC m=+855.523493565" observedRunningTime="2025-11-23 04:09:22.252847357 +0000 UTC m=+858.446518726" watchObservedRunningTime="2025-11-23 04:09:22.256329599 +0000 UTC m=+858.450000958" Nov 23 04:09:27 crc kubenswrapper[4751]: I1123 04:09:27.247686 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-7pqmh" Nov 23 04:09:27 crc kubenswrapper[4751]: I1123 04:09:27.258405 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h6524" Nov 23 04:09:27 crc kubenswrapper[4751]: I1123 04:09:27.441017 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-6dwlq" Nov 23 04:09:27 crc kubenswrapper[4751]: I1123 04:09:27.540096 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-scc77" Nov 23 04:09:27 crc kubenswrapper[4751]: I1123 04:09:27.585889 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-58plx" Nov 23 04:09:27 crc kubenswrapper[4751]: I1123 04:09:27.716123 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-b4c496f69-7tkw8" Nov 23 04:09:27 crc kubenswrapper[4751]: I1123 04:09:27.828737 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-447nt" Nov 23 04:09:27 crc kubenswrapper[4751]: I1123 04:09:27.909237 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-p76zg" Nov 23 04:09:38 crc kubenswrapper[4751]: I1123 04:09:38.115324 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:09:38 crc kubenswrapper[4751]: I1123 04:09:38.116107 4751 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.186506 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58zhk"] Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187437 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="extract-utilities" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187455 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="extract-utilities" Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187479 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="extract-content" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187488 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="extract-content" Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187513 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e5a534-969f-42fe-b122-85db8bb43715" containerName="extract-content" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187522 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e5a534-969f-42fe-b122-85db8bb43715" containerName="extract-content" Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187542 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187550 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187566 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerName="extract-content" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187575 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerName="extract-content" Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187593 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187601 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187619 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e5a534-969f-42fe-b122-85db8bb43715" containerName="extract-utilities" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187629 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e5a534-969f-42fe-b122-85db8bb43715" containerName="extract-utilities" Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187653 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerName="extract-utilities" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187662 4751 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerName="extract-utilities" Nov 23 04:09:47 crc kubenswrapper[4751]: E1123 04:09:47.187679 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e5a534-969f-42fe-b122-85db8bb43715" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187687 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e5a534-969f-42fe-b122-85db8bb43715" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187856 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa09fe3-1c6b-439b-b717-e76800f469f7" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187881 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="db35e685-b57a-44ab-af2a-03d56b81895b" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.187901 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e5a534-969f-42fe-b122-85db8bb43715" containerName="registry-server" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.188808 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.191604 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.192205 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-724rr" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.193862 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.198541 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.199302 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58zhk"] Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.276265 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pztt4"] Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.277574 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.282743 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.283165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgbxz\" (UniqueName: \"kubernetes.io/projected/4110513f-8179-4eca-9201-848078c90ad9-kube-api-access-kgbxz\") pod \"dnsmasq-dns-675f4bcbfc-58zhk\" (UID: \"4110513f-8179-4eca-9201-848078c90ad9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.283292 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4110513f-8179-4eca-9201-848078c90ad9-config\") pod \"dnsmasq-dns-675f4bcbfc-58zhk\" (UID: \"4110513f-8179-4eca-9201-848078c90ad9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.293157 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pztt4"] Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.384069 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-config\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.384119 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4110513f-8179-4eca-9201-848078c90ad9-config\") pod \"dnsmasq-dns-675f4bcbfc-58zhk\" (UID: \"4110513f-8179-4eca-9201-848078c90ad9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.384161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48lp\" (UniqueName: \"kubernetes.io/projected/08c86b13-08b1-44fe-87c6-8981a6a082bc-kube-api-access-w48lp\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.384206 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgbxz\" (UniqueName: \"kubernetes.io/projected/4110513f-8179-4eca-9201-848078c90ad9-kube-api-access-kgbxz\") pod \"dnsmasq-dns-675f4bcbfc-58zhk\" (UID: \"4110513f-8179-4eca-9201-848078c90ad9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.384228 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.385450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4110513f-8179-4eca-9201-848078c90ad9-config\") pod \"dnsmasq-dns-675f4bcbfc-58zhk\" (UID: \"4110513f-8179-4eca-9201-848078c90ad9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 
04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.407682 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgbxz\" (UniqueName: \"kubernetes.io/projected/4110513f-8179-4eca-9201-848078c90ad9-kube-api-access-kgbxz\") pod \"dnsmasq-dns-675f4bcbfc-58zhk\" (UID: \"4110513f-8179-4eca-9201-848078c90ad9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.485568 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w48lp\" (UniqueName: \"kubernetes.io/projected/08c86b13-08b1-44fe-87c6-8981a6a082bc-kube-api-access-w48lp\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.485652 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.485691 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-config\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.486525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-config\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.486762 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.503266 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.511986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48lp\" (UniqueName: \"kubernetes.io/projected/08c86b13-08b1-44fe-87c6-8981a6a082bc-kube-api-access-w48lp\") pod \"dnsmasq-dns-78dd6ddcc-pztt4\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.594178 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.898441 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pztt4"] Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.903719 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:09:47 crc kubenswrapper[4751]: I1123 04:09:47.970145 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58zhk"] Nov 23 04:09:47 crc kubenswrapper[4751]: W1123 04:09:47.974446 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4110513f_8179_4eca_9201_848078c90ad9.slice/crio-94fd8ee68a007628ef2290104af2b5dab10e0d86649a29a7053075e1d93e1447 WatchSource:0}: Error finding container 94fd8ee68a007628ef2290104af2b5dab10e0d86649a29a7053075e1d93e1447: Status 404 returned error can't find the container with id 94fd8ee68a007628ef2290104af2b5dab10e0d86649a29a7053075e1d93e1447 Nov 23 04:09:48 crc kubenswrapper[4751]: I1123 04:09:48.523276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" event={"ID":"08c86b13-08b1-44fe-87c6-8981a6a082bc","Type":"ContainerStarted","Data":"e3821716f8cf1dfc3e1289a8f5feccb5fde1e34ca8b7a4739cfb2a0f908d8c28"} Nov 23 04:09:48 crc kubenswrapper[4751]: I1123 04:09:48.525238 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" event={"ID":"4110513f-8179-4eca-9201-848078c90ad9","Type":"ContainerStarted","Data":"94fd8ee68a007628ef2290104af2b5dab10e0d86649a29a7053075e1d93e1447"} Nov 23 04:09:48 crc kubenswrapper[4751]: I1123 04:09:48.833655 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58zhk"] Nov 23 04:09:48 crc kubenswrapper[4751]: I1123 04:09:48.850954 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jfjzm"] Nov 23 04:09:48 crc kubenswrapper[4751]: I1123 04:09:48.852244 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:48 crc kubenswrapper[4751]: I1123 04:09:48.864169 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jfjzm"] Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.013936 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.014008 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8hgd\" (UniqueName: \"kubernetes.io/projected/cc691dd5-0edf-472e-a953-9de2478572fb-kube-api-access-n8hgd\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.014045 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-config\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.097628 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pztt4"] Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.120257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8hgd\" (UniqueName: \"kubernetes.io/projected/cc691dd5-0edf-472e-a953-9de2478572fb-kube-api-access-n8hgd\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.120430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-config\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.120517 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.121327 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-config\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.125176 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.130689 
4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wwcpd"] Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.132097 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.137816 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wwcpd"] Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.164616 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8hgd\" (UniqueName: \"kubernetes.io/projected/cc691dd5-0edf-472e-a953-9de2478572fb-kube-api-access-n8hgd\") pod \"dnsmasq-dns-666b6646f7-jfjzm\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.175789 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.223334 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-config\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.223400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.223444 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26l4\" (UniqueName: \"kubernetes.io/projected/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-kube-api-access-s26l4\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.324607 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-config\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.324934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.324983 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s26l4\" (UniqueName: \"kubernetes.io/projected/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-kube-api-access-s26l4\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.325609 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-config\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.325957 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.343790 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s26l4\" (UniqueName: \"kubernetes.io/projected/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-kube-api-access-s26l4\") pod \"dnsmasq-dns-57d769cc4f-wwcpd\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.479637 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.655880 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jfjzm"] Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.920399 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wwcpd"] Nov 23 04:09:49 crc kubenswrapper[4751]: W1123 04:09:49.938172 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b99bd6e_67a2_46ea_b4d8_c5fc4df83bb8.slice/crio-3d50b1a3628941e4220e158774a43ef21759c95be84c7e53e1f98114651d0828 WatchSource:0}: Error finding container 3d50b1a3628941e4220e158774a43ef21759c95be84c7e53e1f98114651d0828: Status 404 returned error can't find the container with id 3d50b1a3628941e4220e158774a43ef21759c95be84c7e53e1f98114651d0828 Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.990271 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.996852 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 04:09:49 crc kubenswrapper[4751]: I1123 04:09:49.999892 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.000056 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.001543 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.001633 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.001721 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.010447 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.010634 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.010739 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6hxm9" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.135801 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.135868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.135926 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.135997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.136029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.136295 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/85de7e79-bbdf-4a3c-83d1-5a3977844a72-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.136392 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85de7e79-bbdf-4a3c-83d1-5a3977844a72-pod-info\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.136510 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-server-conf\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.136549 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.136580 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jht2m\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-kube-api-access-jht2m\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.136650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-config-data\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.237800 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.237854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.237898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85de7e79-bbdf-4a3c-83d1-5a3977844a72-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.237919 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85de7e79-bbdf-4a3c-83d1-5a3977844a72-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.237945 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-server-conf\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.237963 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.237978 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jht2m\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-kube-api-access-jht2m\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.238001 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-config-data\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.238020 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.238041 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.238072 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.238472 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.240060 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.240472 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-server-conf\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.241945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-config-data\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.243187 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.248975 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.252090 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.261023 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85de7e79-bbdf-4a3c-83d1-5a3977844a72-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.261552 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.261870 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.262019 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pbjgn" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.262117 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.262166 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.262289 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.262383 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.262512 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.268586 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 04:09:50 crc 
kubenswrapper[4751]: I1123 04:09:50.271958 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85de7e79-bbdf-4a3c-83d1-5a3977844a72-pod-info\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.272473 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.272678 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jht2m\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-kube-api-access-jht2m\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.272830 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") " pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.273234 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.330716 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339242 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339264 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7bc6\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-kube-api-access-m7bc6\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339360 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3885484b-1988-4a56-9b08-7848d614be82-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339376 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339456 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339504 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339547 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339574 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3885484b-1988-4a56-9b08-7848d614be82-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339593 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.339633 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440608 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7bc6\" (UniqueName: 
\"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-kube-api-access-m7bc6\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440674 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3885484b-1988-4a56-9b08-7848d614be82-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440740 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440766 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440785 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3885484b-1988-4a56-9b08-7848d614be82-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.440824 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.441068 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.441837 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.442026 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.442166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.442493 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.442499 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.443958 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3885484b-1988-4a56-9b08-7848d614be82-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.444190 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.445477 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.446535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3885484b-1988-4a56-9b08-7848d614be82-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.468820 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m7bc6\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-kube-api-access-m7bc6\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.476630 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.557733 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" event={"ID":"cc691dd5-0edf-472e-a953-9de2478572fb","Type":"ContainerStarted","Data":"1edf70deb971782a1aea8cfa03040bd0b49753f3ca18cfc2f83e00b1e800ab37"} Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.559318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" event={"ID":"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8","Type":"ContainerStarted","Data":"3d50b1a3628941e4220e158774a43ef21759c95be84c7e53e1f98114651d0828"} Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.596589 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:09:50 crc kubenswrapper[4751]: I1123 04:09:50.945952 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 04:09:50 crc kubenswrapper[4751]: W1123 04:09:50.956941 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85de7e79_bbdf_4a3c_83d1_5a3977844a72.slice/crio-a0e210fd40de7d49f53ae43c23edc7946a3ba7dbe7b6ab340b0107754e656e65 WatchSource:0}: Error finding container a0e210fd40de7d49f53ae43c23edc7946a3ba7dbe7b6ab340b0107754e656e65: Status 404 returned error can't find the container with id a0e210fd40de7d49f53ae43c23edc7946a3ba7dbe7b6ab340b0107754e656e65 Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.234687 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 04:09:51 crc kubenswrapper[4751]: W1123 04:09:51.250786 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3885484b_1988_4a56_9b08_7848d614be82.slice/crio-a5ac60ed33bd695b8fbf0a196eebffa8e5e6e05098b8a1d7bdfe34181248d8dd WatchSource:0}: Error finding container a5ac60ed33bd695b8fbf0a196eebffa8e5e6e05098b8a1d7bdfe34181248d8dd: Status 404 returned error can't find the container with id a5ac60ed33bd695b8fbf0a196eebffa8e5e6e05098b8a1d7bdfe34181248d8dd Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.568200 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"85de7e79-bbdf-4a3c-83d1-5a3977844a72","Type":"ContainerStarted","Data":"a0e210fd40de7d49f53ae43c23edc7946a3ba7dbe7b6ab340b0107754e656e65"} Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.569734 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3885484b-1988-4a56-9b08-7848d614be82","Type":"ContainerStarted","Data":"a5ac60ed33bd695b8fbf0a196eebffa8e5e6e05098b8a1d7bdfe34181248d8dd"} Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.946681 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstack-galera-0"] Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.950404 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.952978 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-q4nng" Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.954007 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.954286 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.954482 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.955054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 23 04:09:51 crc kubenswrapper[4751]: I1123 04:09:51.963234 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.086247 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.086415 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-config-data-default\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.086451 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.086617 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.086671 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.086709 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-kolla-config\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 
04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.086757 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.086782 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrbs\" (UniqueName: \"kubernetes.io/projected/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-kube-api-access-nbrbs\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.188380 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.188441 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-config-data-default\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.188462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.188495 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.188519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.188540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-kolla-config\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.188560 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.188577 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrbs\" 
(UniqueName: \"kubernetes.io/projected/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-kube-api-access-nbrbs\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.189574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-kolla-config\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.189830 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-config-data-default\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.189917 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.190235 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.190485 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.204903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.215827 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.221426 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrbs\" (UniqueName: \"kubernetes.io/projected/f73a5c1f-fac1-4b2d-9611-819ac8ebd57a-kube-api-access-nbrbs\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 04:09:52.245524 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a\") " pod="openstack/openstack-galera-0" Nov 23 04:09:52 crc kubenswrapper[4751]: I1123 
04:09:52.306722 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.160995 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.162911 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.167041 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.186502 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tlqdg" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.186761 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.187879 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.187910 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.205437 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44c38b4f-095f-46ec-8a95-d7335e696f1b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.205502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rhdm\" (UniqueName: \"kubernetes.io/projected/44c38b4f-095f-46ec-8a95-d7335e696f1b-kube-api-access-9rhdm\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.205532 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/44c38b4f-095f-46ec-8a95-d7335e696f1b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.205553 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/44c38b4f-095f-46ec-8a95-d7335e696f1b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.205585 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.205611 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.205627 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.205652 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.306259 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rhdm\" (UniqueName: \"kubernetes.io/projected/44c38b4f-095f-46ec-8a95-d7335e696f1b-kube-api-access-9rhdm\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.306312 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/44c38b4f-095f-46ec-8a95-d7335e696f1b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.306356 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/44c38b4f-095f-46ec-8a95-d7335e696f1b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.306386 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.306412 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.306426 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.306449 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.306498 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44c38b4f-095f-46ec-8a95-d7335e696f1b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.307661 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.308640 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.313944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/44c38b4f-095f-46ec-8a95-d7335e696f1b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.314634 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/44c38b4f-095f-46ec-8a95-d7335e696f1b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.316859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.330309 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44c38b4f-095f-46ec-8a95-d7335e696f1b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.336800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44c38b4f-095f-46ec-8a95-d7335e696f1b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.340203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rhdm\" (UniqueName: \"kubernetes.io/projected/44c38b4f-095f-46ec-8a95-d7335e696f1b-kube-api-access-9rhdm\") pod \"openstack-cell1-galera-0\" (UID: 
\"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.353229 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"44c38b4f-095f-46ec-8a95-d7335e696f1b\") " pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.477011 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.478126 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.479965 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.482230 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rvxc9" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.482509 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.503452 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.510573 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.511986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3125267d-8f09-4e74-90e2-a8f85e538b86-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.512045 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3125267d-8f09-4e74-90e2-a8f85e538b86-config-data\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.512121 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtcnc\" (UniqueName: \"kubernetes.io/projected/3125267d-8f09-4e74-90e2-a8f85e538b86-kube-api-access-vtcnc\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.513980 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3125267d-8f09-4e74-90e2-a8f85e538b86-kolla-config\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.514022 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3125267d-8f09-4e74-90e2-a8f85e538b86-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.615237 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtcnc\" (UniqueName: \"kubernetes.io/projected/3125267d-8f09-4e74-90e2-a8f85e538b86-kube-api-access-vtcnc\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.615288 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3125267d-8f09-4e74-90e2-a8f85e538b86-kolla-config\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.615314 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3125267d-8f09-4e74-90e2-a8f85e538b86-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.615360 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3125267d-8f09-4e74-90e2-a8f85e538b86-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.615397 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3125267d-8f09-4e74-90e2-a8f85e538b86-config-data\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.617159 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3125267d-8f09-4e74-90e2-a8f85e538b86-kolla-config\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.618285 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3125267d-8f09-4e74-90e2-a8f85e538b86-config-data\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.622954 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3125267d-8f09-4e74-90e2-a8f85e538b86-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.626225 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3125267d-8f09-4e74-90e2-a8f85e538b86-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.633067 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtcnc\" (UniqueName: \"kubernetes.io/projected/3125267d-8f09-4e74-90e2-a8f85e538b86-kube-api-access-vtcnc\") pod \"memcached-0\" (UID: \"3125267d-8f09-4e74-90e2-a8f85e538b86\") " pod="openstack/memcached-0" Nov 23 04:09:53 crc kubenswrapper[4751]: I1123 04:09:53.821311 4751 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 04:09:55 crc kubenswrapper[4751]: I1123 04:09:55.156434 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:09:55 crc kubenswrapper[4751]: I1123 04:09:55.157331 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 04:09:55 crc kubenswrapper[4751]: I1123 04:09:55.161110 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-spz47" Nov 23 04:09:55 crc kubenswrapper[4751]: I1123 04:09:55.167154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:09:55 crc kubenswrapper[4751]: I1123 04:09:55.344481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrxsb\" (UniqueName: \"kubernetes.io/projected/82dca28c-075e-461a-9404-8298cce5588d-kube-api-access-nrxsb\") pod \"kube-state-metrics-0\" (UID: \"82dca28c-075e-461a-9404-8298cce5588d\") " pod="openstack/kube-state-metrics-0" Nov 23 04:09:55 crc kubenswrapper[4751]: I1123 04:09:55.448610 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrxsb\" (UniqueName: \"kubernetes.io/projected/82dca28c-075e-461a-9404-8298cce5588d-kube-api-access-nrxsb\") pod \"kube-state-metrics-0\" (UID: \"82dca28c-075e-461a-9404-8298cce5588d\") " pod="openstack/kube-state-metrics-0" Nov 23 04:09:55 crc kubenswrapper[4751]: I1123 04:09:55.466270 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrxsb\" (UniqueName: \"kubernetes.io/projected/82dca28c-075e-461a-9404-8298cce5588d-kube-api-access-nrxsb\") pod \"kube-state-metrics-0\" (UID: \"82dca28c-075e-461a-9404-8298cce5588d\") " pod="openstack/kube-state-metrics-0" Nov 23 04:09:55 crc kubenswrapper[4751]: I1123 04:09:55.485514 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.736448 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x65b7"] Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.738551 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x65b7" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.743681 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x65b7"] Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.744936 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9hf6v" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.746066 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.746095 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.795207 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-26bzb"] Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.796707 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.807483 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-26bzb"] Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-scripts\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11513e97-ce99-4112-bf99-386d0074fc15-scripts\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903200 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-run\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903222 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxfhj\" (UniqueName: \"kubernetes.io/projected/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-kube-api-access-gxfhj\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903248 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7f4p\" (UniqueName: \"kubernetes.io/projected/11513e97-ce99-4112-bf99-386d0074fc15-kube-api-access-s7f4p\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903289 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-run-ovn\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903320 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-ovn-controller-tls-certs\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903360 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-lib\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-log\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-etc-ovs\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903589 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-log-ovn\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-run\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:58 crc kubenswrapper[4751]: I1123 04:09:58.903727 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-combined-ca-bundle\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.004725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11513e97-ce99-4112-bf99-386d0074fc15-scripts\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.004790 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-run\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.004829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxfhj\" (UniqueName: \"kubernetes.io/projected/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-kube-api-access-gxfhj\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.004861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7f4p\" (UniqueName: \"kubernetes.io/projected/11513e97-ce99-4112-bf99-386d0074fc15-kube-api-access-s7f4p\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.004912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-run-ovn\") pod \"ovn-controller-x65b7\" 
(UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.004950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-ovn-controller-tls-certs\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.005368 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-run\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.005466 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-run-ovn\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.005558 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-lib\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.005798 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-lib\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-log\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006171 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-log\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006244 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-etc-ovs\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006500 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-etc-ovs\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-log-ovn\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-run\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006703 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11513e97-ce99-4112-bf99-386d0074fc15-var-run\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-var-log-ovn\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.006729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-combined-ca-bundle\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.007128 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11513e97-ce99-4112-bf99-386d0074fc15-scripts\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.007319 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-scripts\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.009846 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-scripts\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.014042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-ovn-controller-tls-certs\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.015863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-combined-ca-bundle\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.021990 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxfhj\" (UniqueName: \"kubernetes.io/projected/4e8bfa9a-1b92-428e-a443-21ccb190a5bd-kube-api-access-gxfhj\") pod \"ovn-controller-x65b7\" (UID: \"4e8bfa9a-1b92-428e-a443-21ccb190a5bd\") " pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.042871 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7f4p\" (UniqueName: \"kubernetes.io/projected/11513e97-ce99-4112-bf99-386d0074fc15-kube-api-access-s7f4p\") pod \"ovn-controller-ovs-26bzb\" (UID: \"11513e97-ce99-4112-bf99-386d0074fc15\") " pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.062783 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x65b7" Nov 23 04:09:59 crc kubenswrapper[4751]: I1123 04:09:59.110561 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.161681 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.166605 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.171208 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.171222 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.171272 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.171288 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.171769 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7q9bw" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.187187 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.344540 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-config\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.344623 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.344776 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " 
pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.344836 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbrdq\" (UniqueName: \"kubernetes.io/projected/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-kube-api-access-hbrdq\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.344864 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.345027 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.345093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.345123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447184 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447307 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447373 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-config\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447440 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447539 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbrdq\" (UniqueName: \"kubernetes.io/projected/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-kube-api-access-hbrdq\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447576 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.447680 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.448180 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.448717 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-config\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.448740 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.456558 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.458611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.465124 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.478650 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.482751 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbrdq\" (UniqueName: \"kubernetes.io/projected/eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28-kube-api-access-hbrdq\") pod \"ovsdbserver-nb-0\" (UID: \"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28\") " pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:01 crc kubenswrapper[4751]: I1123 04:10:01.494301 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:02 crc kubenswrapper[4751]: I1123 04:10:02.887517 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 04:10:02 crc kubenswrapper[4751]: I1123 04:10:02.889218 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:02 crc kubenswrapper[4751]: I1123 04:10:02.891498 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ntlqd" Nov 23 04:10:02 crc kubenswrapper[4751]: I1123 04:10:02.891959 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 23 04:10:02 crc kubenswrapper[4751]: I1123 04:10:02.892369 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 23 04:10:02 crc kubenswrapper[4751]: I1123 04:10:02.892718 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 23 04:10:02 crc kubenswrapper[4751]: I1123 04:10:02.915738 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.072602 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hk5x\" (UniqueName: \"kubernetes.io/projected/d49c307d-c4e7-412a-9506-71b93c1a1557-kube-api-access-9hk5x\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.072673 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d49c307d-c4e7-412a-9506-71b93c1a1557-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.072719 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49c307d-c4e7-412a-9506-71b93c1a1557-config\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.072917 4751 
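The local-volume PVs above mount in two phases: MountVolume.MountDevice resolves the backing directory (device mount path /mnt/openstack/pv12 for ovsdbserver-nb-0, /mnt/openstack/pv07 for the galera pod earlier), and MountVolume.SetUp then exposes it under the pod's volumes directory. A small sketch, run on the node itself, that scans the mount table for that backing path (the path comes from the log; everything else is illustrative):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// /mnt/openstack/pv12 is the device mount path logged by
    	// MountVolume.MountDevice above.
    	f, err := os.Open("/proc/self/mountinfo")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.Contains(sc.Text(), "/mnt/openstack/pv12") {
    			fmt.Println(sc.Text())
    		}
    	}
    }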
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.072917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.073137 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.073405 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.073491 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d49c307d-c4e7-412a-9506-71b93c1a1557-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.073600 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175401 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175450 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d49c307d-c4e7-412a-9506-71b93c1a1557-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175503 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hk5x\" (UniqueName: \"kubernetes.io/projected/d49c307d-c4e7-412a-9506-71b93c1a1557-kube-api-access-9hk5x\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175640 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d49c307d-c4e7-412a-9506-71b93c1a1557-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175689 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49c307d-c4e7-412a-9506-71b93c1a1557-config\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175721 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.175768 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.177643 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d49c307d-c4e7-412a-9506-71b93c1a1557-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.178901 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49c307d-c4e7-412a-9506-71b93c1a1557-config\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.179588 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d49c307d-c4e7-412a-9506-71b93c1a1557-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.182675 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.183262 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0"
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d49c307d-c4e7-412a-9506-71b93c1a1557-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.204789 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hk5x\" (UniqueName: \"kubernetes.io/projected/d49c307d-c4e7-412a-9506-71b93c1a1557-kube-api-access-9hk5x\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.212998 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d49c307d-c4e7-412a-9506-71b93c1a1557\") " pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:03 crc kubenswrapper[4751]: I1123 04:10:03.217854 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:05 crc kubenswrapper[4751]: E1123 04:10:05.899817 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 23 04:10:05 crc kubenswrapper[4751]: E1123 04:10:05.900720 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7bc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(3885484b-1988-4a56-9b08-7848d614be82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:10:05 crc kubenswrapper[4751]: E1123 04:10:05.902958 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3885484b-1988-4a56-9b08-7848d614be82" Nov 23 04:10:06 crc kubenswrapper[4751]: E1123 04:10:06.717999 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3885484b-1988-4a56-9b08-7848d614be82" Nov 23 04:10:08 crc kubenswrapper[4751]: I1123 04:10:08.115428 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:10:08 crc kubenswrapper[4751]: I1123 04:10:08.115492 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.105563 4751 log.go:32] "PullImage 
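The two machine-config-daemon entries above are a failed HTTP liveness probe: kubelet issued GET http://127.0.0.1:8798/health and the connection was refused, so the prober logs both the interesting-pod line and the "Probe failed" result. A minimal Go check that performs the same request the prober reports (the endpoint comes from the log; the timeout is illustrative):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	// Same endpoint the kubelet prober reports above.
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		fmt.Println("probe failure:", err) // e.g. connect: connection refused
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("probe status:", resp.Status)
    }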
from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.106203 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jht2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(85de7e79-bbdf-4a3c-83d1-5a3977844a72): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.108601 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.750638 4751 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.887579 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.887806 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s26l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-wwcpd_openstack(0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.889011 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" podUID="0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.903814 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.903940 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgbxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-58zhk_openstack(4110513f-8179-4eca-9201-848078c90ad9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.905986 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" podUID="4110513f-8179-4eca-9201-848078c90ad9" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.925881 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.926052 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-jfjzm_openstack(cc691dd5-0edf-472e-a953-9de2478572fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.926945 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.927031 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w48lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-pztt4_openstack(08c86b13-08b1-44fe-87c6-8981a6a082bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.928173 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" podUID="08c86b13-08b1-44fe-87c6-8981a6a082bc" Nov 23 04:10:10 crc kubenswrapper[4751]: E1123 04:10:10.928262 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" podUID="cc691dd5-0edf-472e-a953-9de2478572fb" Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.409533 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.429206 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x65b7"] Nov 23 04:10:11 crc kubenswrapper[4751]: W1123 04:10:11.445911 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44c38b4f_095f_46ec_8a95_d7335e696f1b.slice/crio-aed4dbc3d401c1972beb09ad145938085739f3824f516cd584208f78964c19c6 WatchSource:0}: Error finding container aed4dbc3d401c1972beb09ad145938085739f3824f516cd584208f78964c19c6: Status 404 returned error can't find the container with id aed4dbc3d401c1972beb09ad145938085739f3824f516cd584208f78964c19c6 Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.452313 4751 kubelet.go:2428] "SyncLoop UPDATE" 
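All four dnsmasq init containers above share the same shape: a `--test` run of dnsmasq that only validates the rendered config, with the listen address injected through the downward API (`POD_IP` resolved from `status.podIP`). A minimal sketch of that one mechanism, using the real `k8s.io/api/core/v1` types (the surrounding pod spec is omitted):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIP mirrors the POD_IP EnvVar in the dnsmasq init containers logged
// above: no literal Value; the kubelet fills it in from the pod's status
// at container start, which is why the dump shows Value: empty with a
// FieldRef of status.podIP.
var podIP = corev1.EnvVar{
	Name: "POD_IP",
	ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{
			APIVersion: "v1",
			FieldPath:  "status.podIP",
		},
	},
}

func main() { fmt.Printf("%+v\n", podIP) }
```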
source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.537167 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 04:10:11 crc kubenswrapper[4751]: W1123 04:10:11.595930 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82dca28c_075e_461a_9404_8298cce5588d.slice/crio-43f4ddf0b3f817208f4b3bb31bb765967eaf40f62fe36151e5c54f48548b0af3 WatchSource:0}: Error finding container 43f4ddf0b3f817208f4b3bb31bb765967eaf40f62fe36151e5c54f48548b0af3: Status 404 returned error can't find the container with id 43f4ddf0b3f817208f4b3bb31bb765967eaf40f62fe36151e5c54f48548b0af3 Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.599246 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:10:11 crc kubenswrapper[4751]: W1123 04:10:11.654834 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3125267d_8f09_4e74_90e2_a8f85e538b86.slice/crio-cdfd9cd51bc1f511a15f2c217999bcfc2ae91868ef8fbbbe7e777ac91e57088f WatchSource:0}: Error finding container cdfd9cd51bc1f511a15f2c217999bcfc2ae91868ef8fbbbe7e777ac91e57088f: Status 404 returned error can't find the container with id cdfd9cd51bc1f511a15f2c217999bcfc2ae91868ef8fbbbe7e777ac91e57088f Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.656710 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.740011 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.754771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d49c307d-c4e7-412a-9506-71b93c1a1557","Type":"ContainerStarted","Data":"53a0eae1a4a71c30948254cef882d4749017649413f7fdfd59573faa0b85e00d"} Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.755780 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"82dca28c-075e-461a-9404-8298cce5588d","Type":"ContainerStarted","Data":"43f4ddf0b3f817208f4b3bb31bb765967eaf40f62fe36151e5c54f48548b0af3"} Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.756813 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28","Type":"ContainerStarted","Data":"61025a3ffa869c261737bafe90c3dc16309cec504f5e36ecc9494a8c6d963705"} Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.757811 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"44c38b4f-095f-46ec-8a95-d7335e696f1b","Type":"ContainerStarted","Data":"aed4dbc3d401c1972beb09ad145938085739f3824f516cd584208f78964c19c6"} Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.758779 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3125267d-8f09-4e74-90e2-a8f85e538b86","Type":"ContainerStarted","Data":"cdfd9cd51bc1f511a15f2c217999bcfc2ae91868ef8fbbbe7e777ac91e57088f"} Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.760309 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x65b7" 
event={"ID":"4e8bfa9a-1b92-428e-a443-21ccb190a5bd","Type":"ContainerStarted","Data":"28da906fd61035174a0760cbf8aa22624a3f42882b06e72474851277941712d0"} Nov 23 04:10:11 crc kubenswrapper[4751]: I1123 04:10:11.761342 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a","Type":"ContainerStarted","Data":"9ac2b0c6b229e875cc87892482a416cd97b7f1c6e8d618d1c20291d5d29db6d0"} Nov 23 04:10:11 crc kubenswrapper[4751]: E1123 04:10:11.764734 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" podUID="0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8" Nov 23 04:10:11 crc kubenswrapper[4751]: E1123 04:10:11.764783 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" podUID="cc691dd5-0edf-472e-a953-9de2478572fb" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.053718 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.151635 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w48lp\" (UniqueName: \"kubernetes.io/projected/08c86b13-08b1-44fe-87c6-8981a6a082bc-kube-api-access-w48lp\") pod \"08c86b13-08b1-44fe-87c6-8981a6a082bc\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.151710 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-config\") pod \"08c86b13-08b1-44fe-87c6-8981a6a082bc\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.151775 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-dns-svc\") pod \"08c86b13-08b1-44fe-87c6-8981a6a082bc\" (UID: \"08c86b13-08b1-44fe-87c6-8981a6a082bc\") " Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.152823 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-config" (OuterVolumeSpecName: "config") pod "08c86b13-08b1-44fe-87c6-8981a6a082bc" (UID: "08c86b13-08b1-44fe-87c6-8981a6a082bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.152930 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08c86b13-08b1-44fe-87c6-8981a6a082bc" (UID: "08c86b13-08b1-44fe-87c6-8981a6a082bc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.158550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c86b13-08b1-44fe-87c6-8981a6a082bc-kube-api-access-w48lp" (OuterVolumeSpecName: "kube-api-access-w48lp") pod "08c86b13-08b1-44fe-87c6-8981a6a082bc" (UID: "08c86b13-08b1-44fe-87c6-8981a6a082bc"). InnerVolumeSpecName "kube-api-access-w48lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.204875 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.253529 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w48lp\" (UniqueName: \"kubernetes.io/projected/08c86b13-08b1-44fe-87c6-8981a6a082bc-kube-api-access-w48lp\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.253568 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.253581 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c86b13-08b1-44fe-87c6-8981a6a082bc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.354880 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4110513f-8179-4eca-9201-848078c90ad9-config\") pod \"4110513f-8179-4eca-9201-848078c90ad9\" (UID: \"4110513f-8179-4eca-9201-848078c90ad9\") " Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.355315 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4110513f-8179-4eca-9201-848078c90ad9-config" (OuterVolumeSpecName: "config") pod "4110513f-8179-4eca-9201-848078c90ad9" (UID: "4110513f-8179-4eca-9201-848078c90ad9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.355463 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgbxz\" (UniqueName: \"kubernetes.io/projected/4110513f-8179-4eca-9201-848078c90ad9-kube-api-access-kgbxz\") pod \"4110513f-8179-4eca-9201-848078c90ad9\" (UID: \"4110513f-8179-4eca-9201-848078c90ad9\") " Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.356193 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4110513f-8179-4eca-9201-848078c90ad9-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.358907 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4110513f-8179-4eca-9201-848078c90ad9-kube-api-access-kgbxz" (OuterVolumeSpecName: "kube-api-access-kgbxz") pod "4110513f-8179-4eca-9201-848078c90ad9" (UID: "4110513f-8179-4eca-9201-848078c90ad9"). InnerVolumeSpecName "kube-api-access-kgbxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.449758 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-26bzb"] Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.457764 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgbxz\" (UniqueName: \"kubernetes.io/projected/4110513f-8179-4eca-9201-848078c90ad9-kube-api-access-kgbxz\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:12 crc kubenswrapper[4751]: W1123 04:10:12.491930 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11513e97_ce99_4112_bf99_386d0074fc15.slice/crio-47c3e372b2df4bf2de2060793469cb47214811519aec735d024ffc7b07235835 WatchSource:0}: Error finding container 47c3e372b2df4bf2de2060793469cb47214811519aec735d024ffc7b07235835: Status 404 returned error can't find the container with id 47c3e372b2df4bf2de2060793469cb47214811519aec735d024ffc7b07235835 Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.768474 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" event={"ID":"08c86b13-08b1-44fe-87c6-8981a6a082bc","Type":"ContainerDied","Data":"e3821716f8cf1dfc3e1289a8f5feccb5fde1e34ca8b7a4739cfb2a0f908d8c28"} Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.768512 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-pztt4" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.770174 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" event={"ID":"4110513f-8179-4eca-9201-848078c90ad9","Type":"ContainerDied","Data":"94fd8ee68a007628ef2290104af2b5dab10e0d86649a29a7053075e1d93e1447"} Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.770229 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58zhk" Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.772650 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26bzb" event={"ID":"11513e97-ce99-4112-bf99-386d0074fc15","Type":"ContainerStarted","Data":"47c3e372b2df4bf2de2060793469cb47214811519aec735d024ffc7b07235835"} Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.803054 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pztt4"] Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.808815 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-pztt4"] Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.838070 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58zhk"] Nov 23 04:10:12 crc kubenswrapper[4751]: I1123 04:10:12.838363 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58zhk"] Nov 23 04:10:14 crc kubenswrapper[4751]: I1123 04:10:14.661066 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c86b13-08b1-44fe-87c6-8981a6a082bc" path="/var/lib/kubelet/pods/08c86b13-08b1-44fe-87c6-8981a6a082bc/volumes" Nov 23 04:10:14 crc kubenswrapper[4751]: I1123 04:10:14.661745 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4110513f-8179-4eca-9201-848078c90ad9" path="/var/lib/kubelet/pods/4110513f-8179-4eca-9201-848078c90ad9/volumes" Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.829612 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28","Type":"ContainerStarted","Data":"b31231c849fcfc73a28af54c90469c941944d6db1d08a8effca0bda70ceaa5f7"} Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.830976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"44c38b4f-095f-46ec-8a95-d7335e696f1b","Type":"ContainerStarted","Data":"9da0758693865ae3c5a6abad6fc2cf9fdd894c0718e632d5e39a093955191d27"} Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.832491 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3125267d-8f09-4e74-90e2-a8f85e538b86","Type":"ContainerStarted","Data":"516ee3d14653d9b1e8749e1e35dca13cbc45d835ba8e248e7c985e1595d86c0c"} Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.832618 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.834038 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x65b7" event={"ID":"4e8bfa9a-1b92-428e-a443-21ccb190a5bd","Type":"ContainerStarted","Data":"0f16e472c17c1a09c41abd057ff3e11dc51b8e96eae9e85d6c5e9f53906b45e9"} Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.834225 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-x65b7" Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.835641 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a","Type":"ContainerStarted","Data":"3a18c6c7605411c2264eff85b3b3dca209234116972deddbb0b76455da955374"} Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.837364 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26bzb" 
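The two kubelet_volumes.go entries at 04:10:14 show the kubelet garbage-collecting `/var/lib/kubelet/pods/<podUID>/volumes` for the deleted dnsmasq pods. As a small illustrative sketch (not a kubelet API; just the path layout those entries name), this is how one might list the per-pod volume directories still present on the node:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Lists the per-pod volume directories referenced by the
// "Cleaned up orphaned pod volumes dir" entries above. Any directory here
// whose pod UID no longer exists in the API is an orphan awaiting the
// kubelet's next cleanup pass.
func main() {
	dirs, err := filepath.Glob("/var/lib/kubelet/pods/*/volumes")
	if err != nil {
		panic(err) // Glob only errors on a malformed pattern
	}
	for _, d := range dirs {
		fmt.Println(d)
	}
}
```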
event={"ID":"11513e97-ce99-4112-bf99-386d0074fc15","Type":"ContainerStarted","Data":"58e6b5ae4d81043bcaa0c01c234a3ba868d959c556e2be78dc1f0487c810c487"} Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.839177 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d49c307d-c4e7-412a-9506-71b93c1a1557","Type":"ContainerStarted","Data":"f3794d3121b7945679c345c68cacfe0b9898aa56aea53dd8d4fb2b1bd1397fd9"} Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.840613 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"82dca28c-075e-461a-9404-8298cce5588d","Type":"ContainerStarted","Data":"707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8"} Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.840729 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.882938 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.888409358 podStartE2EDuration="25.882920494s" podCreationTimestamp="2025-11-23 04:09:53 +0000 UTC" firstStartedPulling="2025-11-23 04:10:11.656687523 +0000 UTC m=+907.850358882" lastFinishedPulling="2025-11-23 04:10:17.651198659 +0000 UTC m=+913.844870018" observedRunningTime="2025-11-23 04:10:18.880300805 +0000 UTC m=+915.073972164" watchObservedRunningTime="2025-11-23 04:10:18.882920494 +0000 UTC m=+915.076591853" Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.903391 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x65b7" podStartSLOduration=14.195705066 podStartE2EDuration="20.90337439s" podCreationTimestamp="2025-11-23 04:09:58 +0000 UTC" firstStartedPulling="2025-11-23 04:10:11.445019804 +0000 UTC m=+907.638691193" lastFinishedPulling="2025-11-23 04:10:18.152689128 +0000 UTC m=+914.346360517" observedRunningTime="2025-11-23 04:10:18.894826466 +0000 UTC m=+915.088497825" watchObservedRunningTime="2025-11-23 04:10:18.90337439 +0000 UTC m=+915.097045749" Nov 23 04:10:18 crc kubenswrapper[4751]: I1123 04:10:18.947955 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.279851761 podStartE2EDuration="23.947936519s" podCreationTimestamp="2025-11-23 04:09:55 +0000 UTC" firstStartedPulling="2025-11-23 04:10:11.597839259 +0000 UTC m=+907.791510618" lastFinishedPulling="2025-11-23 04:10:18.265924017 +0000 UTC m=+914.459595376" observedRunningTime="2025-11-23 04:10:18.941544821 +0000 UTC m=+915.135216180" watchObservedRunningTime="2025-11-23 04:10:18.947936519 +0000 UTC m=+915.141607888" Nov 23 04:10:19 crc kubenswrapper[4751]: I1123 04:10:19.848893 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3885484b-1988-4a56-9b08-7848d614be82","Type":"ContainerStarted","Data":"7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9"} Nov 23 04:10:19 crc kubenswrapper[4751]: I1123 04:10:19.851724 4751 generic.go:334] "Generic (PLEG): container finished" podID="11513e97-ce99-4112-bf99-386d0074fc15" containerID="58e6b5ae4d81043bcaa0c01c234a3ba868d959c556e2be78dc1f0487c810c487" exitCode=0 Nov 23 04:10:19 crc kubenswrapper[4751]: I1123 04:10:19.851772 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26bzb" 
event={"ID":"11513e97-ce99-4112-bf99-386d0074fc15","Type":"ContainerDied","Data":"58e6b5ae4d81043bcaa0c01c234a3ba868d959c556e2be78dc1f0487c810c487"} Nov 23 04:10:20 crc kubenswrapper[4751]: I1123 04:10:20.860515 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26bzb" event={"ID":"11513e97-ce99-4112-bf99-386d0074fc15","Type":"ContainerStarted","Data":"46d6c307983b4bd39b33915bf3f6165d8ad6728e8d36dc3b760e5bd362aae2dc"} Nov 23 04:10:21 crc kubenswrapper[4751]: I1123 04:10:21.872830 4751 generic.go:334] "Generic (PLEG): container finished" podID="f73a5c1f-fac1-4b2d-9611-819ac8ebd57a" containerID="3a18c6c7605411c2264eff85b3b3dca209234116972deddbb0b76455da955374" exitCode=0 Nov 23 04:10:21 crc kubenswrapper[4751]: I1123 04:10:21.872923 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a","Type":"ContainerDied","Data":"3a18c6c7605411c2264eff85b3b3dca209234116972deddbb0b76455da955374"} Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.887748 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d49c307d-c4e7-412a-9506-71b93c1a1557","Type":"ContainerStarted","Data":"7dd9974db821ff1f82769cfc52ba0894a42721a1f4fb09fe8f4ab40a1ed556e7"} Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.892519 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28","Type":"ContainerStarted","Data":"aa2bc68adb60fdddcacd7d5f8eb12315e70a5ac993fa5cc7c317e9d54ad90a56"} Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.897450 4751 generic.go:334] "Generic (PLEG): container finished" podID="44c38b4f-095f-46ec-8a95-d7335e696f1b" containerID="9da0758693865ae3c5a6abad6fc2cf9fdd894c0718e632d5e39a093955191d27" exitCode=0 Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.897599 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"44c38b4f-095f-46ec-8a95-d7335e696f1b","Type":"ContainerDied","Data":"9da0758693865ae3c5a6abad6fc2cf9fdd894c0718e632d5e39a093955191d27"} Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.901565 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f73a5c1f-fac1-4b2d-9611-819ac8ebd57a","Type":"ContainerStarted","Data":"30448d0f1fd2231c2d22a569101322503f4b12607000e851a4481aa925600e54"} Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.908692 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-26bzb" event={"ID":"11513e97-ce99-4112-bf99-386d0074fc15","Type":"ContainerStarted","Data":"0661ff99ad9a30ef1ad534f40873e12d218e0d6546c81adc6835768c0fa2edf5"} Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.909223 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.909274 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.934291 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.903294109 podStartE2EDuration="21.93426129s" podCreationTimestamp="2025-11-23 04:10:01 +0000 UTC" firstStartedPulling="2025-11-23 04:10:11.544947574 +0000 UTC m=+907.738618943" lastFinishedPulling="2025-11-23 
04:10:21.575914735 +0000 UTC m=+917.769586124" observedRunningTime="2025-11-23 04:10:22.919987916 +0000 UTC m=+919.113659305" watchObservedRunningTime="2025-11-23 04:10:22.93426129 +0000 UTC m=+919.127932689" Nov 23 04:10:22 crc kubenswrapper[4751]: I1123 04:10:22.983130 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.658027258 podStartE2EDuration="32.983101561s" podCreationTimestamp="2025-11-23 04:09:50 +0000 UTC" firstStartedPulling="2025-11-23 04:10:11.420014718 +0000 UTC m=+907.613686077" lastFinishedPulling="2025-11-23 04:10:17.745089011 +0000 UTC m=+913.938760380" observedRunningTime="2025-11-23 04:10:22.981642603 +0000 UTC m=+919.175314002" watchObservedRunningTime="2025-11-23 04:10:22.983101561 +0000 UTC m=+919.176772960" Nov 23 04:10:23 crc kubenswrapper[4751]: I1123 04:10:23.004863 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.152997016 podStartE2EDuration="23.004845471s" podCreationTimestamp="2025-11-23 04:10:00 +0000 UTC" firstStartedPulling="2025-11-23 04:10:11.74124128 +0000 UTC m=+907.934912649" lastFinishedPulling="2025-11-23 04:10:21.593089745 +0000 UTC m=+917.786761104" observedRunningTime="2025-11-23 04:10:23.003031594 +0000 UTC m=+919.196702963" watchObservedRunningTime="2025-11-23 04:10:23.004845471 +0000 UTC m=+919.198516840" Nov 23 04:10:23 crc kubenswrapper[4751]: I1123 04:10:23.033280 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-26bzb" podStartSLOduration=19.378750095 podStartE2EDuration="25.033253646s" podCreationTimestamp="2025-11-23 04:09:58 +0000 UTC" firstStartedPulling="2025-11-23 04:10:12.495665341 +0000 UTC m=+908.689336700" lastFinishedPulling="2025-11-23 04:10:18.150168892 +0000 UTC m=+914.343840251" observedRunningTime="2025-11-23 04:10:23.029558269 +0000 UTC m=+919.223229668" watchObservedRunningTime="2025-11-23 04:10:23.033253646 +0000 UTC m=+919.226925015" Nov 23 04:10:23 crc kubenswrapper[4751]: I1123 04:10:23.218549 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 23 04:10:23 crc kubenswrapper[4751]: I1123 04:10:23.827021 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 23 04:10:23 crc kubenswrapper[4751]: I1123 04:10:23.923129 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"44c38b4f-095f-46ec-8a95-d7335e696f1b","Type":"ContainerStarted","Data":"14d5d7d330e86c6e021e2abf38ff1a1c05c5847fad3da99c93f98d6d06852c79"} Nov 23 04:10:23 crc kubenswrapper[4751]: I1123 04:10:23.927580 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"85de7e79-bbdf-4a3c-83d1-5a3977844a72","Type":"ContainerStarted","Data":"c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9"} Nov 23 04:10:23 crc kubenswrapper[4751]: I1123 04:10:23.960034 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.25953104 podStartE2EDuration="31.959999985s" podCreationTimestamp="2025-11-23 04:09:52 +0000 UTC" firstStartedPulling="2025-11-23 04:10:11.451197606 +0000 UTC m=+907.644868995" lastFinishedPulling="2025-11-23 04:10:18.151666541 +0000 UTC m=+914.345337940" observedRunningTime="2025-11-23 04:10:23.95787451 +0000 UTC m=+920.151545909" 
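The pod_startup_latency_tracker entries carry enough to reconstruct both durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since the pod-startup SLO excludes pull time. A check against the memcached-0 entry above, with the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces memcached-0's numbers from the "Observed pod startup duration"
// entry: e2e = watchObservedRunningTime - podCreationTimestamp, and the SLO
// duration subtracts the image-pull window.
func main() {
	// Layout matching Go's default time.Time print format used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-23 04:09:53 +0000 UTC")
	running := parse("2025-11-23 04:10:18.882920494 +0000 UTC")
	pullStart := parse("2025-11-23 04:10:11.656687523 +0000 UTC")
	pullEnd := parse("2025-11-23 04:10:17.651198659 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e, slo) // 25.882920494s 19.888409358s, matching the log
}
```

The same arithmetic holds for the other pods in these entries; the roughly six-second gap between SLO and E2E durations here is the shared image-pull window around 04:10:11 to 04:10:18.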
Nov 23 04:10:24 crc kubenswrapper[4751]: I1123 04:10:24.219088 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:24 crc kubenswrapper[4751]: I1123 04:10:24.262895 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:24 crc kubenswrapper[4751]: I1123 04:10:24.993014 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.285495 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jfjzm"]
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.317584 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-jkg6p"]
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.319411 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.322013 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.334355 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-jkg6p"]
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.384027 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-v5k4s"]
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.385047 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-v5k4s"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.390144 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.402745 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.402831 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjt6\" (UniqueName: \"kubernetes.io/projected/0881042a-db01-4801-9323-9e3250f62c4c-kube-api-access-brjt6\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.402879 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p"
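The ovsdbserver-sb-0 probe sequence above (startup "unhealthy" at 04:10:24.219, "started" 43ms later, readiness "ready" by 04:10:24.993) is the normal startup-probe handshake: the readiness and liveness probes are gated until the startup probe first succeeds. The log does not show the pod's actual probe spec, so the following is only an illustrative sketch of a container configured to behave that way; the port is a hypothetical value (6642 is the conventional OVN southbound DB port), not taken from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Illustrative only: a startup probe that tolerates a slow first start,
// paired with a readiness probe that takes over once startup succeeds.
var probes = corev1.Container{
	Name: "ovsdbserver-sb", // pod name from the log; everything below is assumed
	StartupProbe: &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(6642)}, // hypothetical port
		},
		FailureThreshold: 30, // allows ~30s of "unhealthy" before giving up
		PeriodSeconds:    1,
	},
	ReadinessProbe: &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(6642)}, // hypothetical port
		},
		PeriodSeconds: 5,
	},
}

func main() { fmt.Println(probes.Name) }
```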
pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.455828 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-v5k4s"] Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.495076 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.498635 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504408 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2b9h\" (UniqueName: \"kubernetes.io/projected/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-kube-api-access-t2b9h\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504447 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-ovn-rundir\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504464 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-combined-ca-bundle\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504494 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504595 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brjt6\" (UniqueName: \"kubernetes.io/projected/0881042a-db01-4801-9323-9e3250f62c4c-kube-api-access-brjt6\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504612 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-ovs-rundir\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504630 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-config\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504647 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.504673 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-config\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.505553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-config\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.506153 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.506527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.533267 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brjt6\" (UniqueName: \"kubernetes.io/projected/0881042a-db01-4801-9323-9e3250f62c4c-kube-api-access-brjt6\") pod \"dnsmasq-dns-6bc7876d45-jkg6p\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.536439 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wwcpd"] Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.603660 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.609281 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-ovs-rundir\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.609318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-config\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " 
pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.609369 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2b9h\" (UniqueName: \"kubernetes.io/projected/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-kube-api-access-t2b9h\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.609420 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-ovn-rundir\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.609439 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-486m4"] Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.609463 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-combined-ca-bundle\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.609498 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.610938 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-ovs-rundir\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.611579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-config\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.611618 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-ovn-rundir\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.615224 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.625883 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.628097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-combined-ca-bundle\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.634054 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.657735 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-486m4"] Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.660695 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2b9h\" (UniqueName: \"kubernetes.io/projected/eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1-kube-api-access-t2b9h\") pod \"ovn-controller-metrics-v5k4s\" (UID: \"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1\") " pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.699542 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-v5k4s" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.711622 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.711695 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-config\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.711861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbpfj\" (UniqueName: \"kubernetes.io/projected/8b17b368-7447-4382-9d82-9dc2e3244009-kube-api-access-zbpfj\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.712020 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.814648 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.814704 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.814733 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-config\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.814803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbpfj\" (UniqueName: \"kubernetes.io/projected/8b17b368-7447-4382-9d82-9dc2e3244009-kube-api-access-zbpfj\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.815741 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.816232 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.816746 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-config\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.834942 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbpfj\" (UniqueName: \"kubernetes.io/projected/8b17b368-7447-4382-9d82-9dc2e3244009-kube-api-access-zbpfj\") pod \"dnsmasq-dns-8cc7fc4dc-486m4\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.850751 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-jkg6p"] Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.879395 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ktpgz"] Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.880883 4751 util.go:30] "No sandbox for pod can be found. 
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.880883 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.883890 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.887583 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ktpgz"]
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.916948 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.917024 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-config\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.917047 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.917093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x4pl\" (UniqueName: \"kubernetes.io/projected/8854c4a0-71d3-4af3-9470-eee896ccb80c-kube-api-access-2x4pl\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.917124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.948311 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc691dd5-0edf-472e-a953-9de2478572fb" containerID="59775a7a6fcde1267428b34326e4307016e904f9e59dbb0c86fd2cea66cb50c3" exitCode=0
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.949254 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" event={"ID":"cc691dd5-0edf-472e-a953-9de2478572fb","Type":"ContainerDied","Data":"59775a7a6fcde1267428b34326e4307016e904f9e59dbb0c86fd2cea66cb50c3"}
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.949288 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.967703 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4"
Nov 23 04:10:25 crc kubenswrapper[4751]: I1123 04:10:25.994281 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.019920 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-config\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.019952 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.020018 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x4pl\" (UniqueName: \"kubernetes.io/projected/8854c4a0-71d3-4af3-9470-eee896ccb80c-kube-api-access-2x4pl\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.020054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.020150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.025135 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-config\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.025782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.026749 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.027300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.054577 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x4pl\" (UniqueName: \"kubernetes.io/projected/8854c4a0-71d3-4af3-9470-eee896ccb80c-kube-api-access-2x4pl\") pod \"dnsmasq-dns-b8fbc5445-ktpgz\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.104423 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.170241 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.193587 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.199878 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.200111 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.200259 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.203857 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9c67j"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.214454 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz"
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.215234 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.229909 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s26l4\" (UniqueName: \"kubernetes.io/projected/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-kube-api-access-s26l4\") pod \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") "
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.229967 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-config\") pod \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") "
Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.230418 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-config" (OuterVolumeSpecName: "config") pod "0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8" (UID: "0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.230966 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-dns-svc\") pod \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\" (UID: \"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8\") " Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231190 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95cf38ba-5edd-4ff7-9213-966b6498df4e-scripts\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231275 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95cf38ba-5edd-4ff7-9213-966b6498df4e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231293 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231321 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dvvf\" (UniqueName: \"kubernetes.io/projected/95cf38ba-5edd-4ff7-9213-966b6498df4e-kube-api-access-8dvvf\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231656 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cf38ba-5edd-4ff7-9213-966b6498df4e-config\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231681 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231737 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.231544 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8" (UID: "0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.237285 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-kube-api-access-s26l4" (OuterVolumeSpecName: "kube-api-access-s26l4") pod "0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8" (UID: "0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8"). InnerVolumeSpecName "kube-api-access-s26l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.333493 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-jkg6p"] Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.333548 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95cf38ba-5edd-4ff7-9213-966b6498df4e-scripts\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.333814 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95cf38ba-5edd-4ff7-9213-966b6498df4e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.333891 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.333977 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dvvf\" (UniqueName: \"kubernetes.io/projected/95cf38ba-5edd-4ff7-9213-966b6498df4e-kube-api-access-8dvvf\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.334107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cf38ba-5edd-4ff7-9213-966b6498df4e-config\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.334237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.334415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.334637 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s26l4\" (UniqueName: \"kubernetes.io/projected/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-kube-api-access-s26l4\") 
on node \"crc\" DevicePath \"\"" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.334707 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.334743 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95cf38ba-5edd-4ff7-9213-966b6498df4e-scripts\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.334975 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/95cf38ba-5edd-4ff7-9213-966b6498df4e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.335279 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cf38ba-5edd-4ff7-9213-966b6498df4e-config\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.342839 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.343066 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-v5k4s"] Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.344178 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.345202 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cf38ba-5edd-4ff7-9213-966b6498df4e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.357463 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dvvf\" (UniqueName: \"kubernetes.io/projected/95cf38ba-5edd-4ff7-9213-966b6498df4e-kube-api-access-8dvvf\") pod \"ovn-northd-0\" (UID: \"95cf38ba-5edd-4ff7-9213-966b6498df4e\") " pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.428442 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.525095 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.543917 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-config\") pod \"cc691dd5-0edf-472e-a953-9de2478572fb\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.543981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8hgd\" (UniqueName: \"kubernetes.io/projected/cc691dd5-0edf-472e-a953-9de2478572fb-kube-api-access-n8hgd\") pod \"cc691dd5-0edf-472e-a953-9de2478572fb\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.544052 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-dns-svc\") pod \"cc691dd5-0edf-472e-a953-9de2478572fb\" (UID: \"cc691dd5-0edf-472e-a953-9de2478572fb\") " Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.569771 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc691dd5-0edf-472e-a953-9de2478572fb-kube-api-access-n8hgd" (OuterVolumeSpecName: "kube-api-access-n8hgd") pod "cc691dd5-0edf-472e-a953-9de2478572fb" (UID: "cc691dd5-0edf-472e-a953-9de2478572fb"). InnerVolumeSpecName "kube-api-access-n8hgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.582927 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-486m4"] Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.608554 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-config" (OuterVolumeSpecName: "config") pod "cc691dd5-0edf-472e-a953-9de2478572fb" (UID: "cc691dd5-0edf-472e-a953-9de2478572fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.634862 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc691dd5-0edf-472e-a953-9de2478572fb" (UID: "cc691dd5-0edf-472e-a953-9de2478572fb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.645325 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.645370 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8hgd\" (UniqueName: \"kubernetes.io/projected/cc691dd5-0edf-472e-a953-9de2478572fb-kube-api-access-n8hgd\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.645384 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc691dd5-0edf-472e-a953-9de2478572fb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.714139 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ktpgz"] Nov 23 04:10:26 crc kubenswrapper[4751]: W1123 04:10:26.719478 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8854c4a0_71d3_4af3_9470_eee896ccb80c.slice/crio-e7ff296d83793088691e4081a60ae51d12d6ae2a20984e6183cb3d1b0cd81446 WatchSource:0}: Error finding container e7ff296d83793088691e4081a60ae51d12d6ae2a20984e6183cb3d1b0cd81446: Status 404 returned error can't find the container with id e7ff296d83793088691e4081a60ae51d12d6ae2a20984e6183cb3d1b0cd81446 Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.731463 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 23 04:10:26 crc kubenswrapper[4751]: E1123 04:10:26.731795 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc691dd5-0edf-472e-a953-9de2478572fb" containerName="init" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.731807 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc691dd5-0edf-472e-a953-9de2478572fb" containerName="init" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.731971 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc691dd5-0edf-472e-a953-9de2478572fb" containerName="init" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.743025 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.750148 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.750463 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.750565 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.750848 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fdfp6" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.754910 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.850150 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.850500 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea516dc6-70bc-461c-8b7e-e269f9287da4-cache\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.850547 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea516dc6-70bc-461c-8b7e-e269f9287da4-lock\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.850588 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.850614 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjfp\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-kube-api-access-qtjfp\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.952207 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.952249 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjfp\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-kube-api-access-qtjfp\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.952277 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.952346 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea516dc6-70bc-461c-8b7e-e269f9287da4-cache\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.952383 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea516dc6-70bc-461c-8b7e-e269f9287da4-lock\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: E1123 04:10:26.952501 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 04:10:26 crc kubenswrapper[4751]: E1123 04:10:26.952529 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 04:10:26 crc kubenswrapper[4751]: E1123 04:10:26.952577 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift podName:ea516dc6-70bc-461c-8b7e-e269f9287da4 nodeName:}" failed. No retries permitted until 2025-11-23 04:10:27.45256046 +0000 UTC m=+923.646231809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift") pod "swift-storage-0" (UID: "ea516dc6-70bc-461c-8b7e-e269f9287da4") : configmap "swift-ring-files" not found Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.952573 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.952742 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea516dc6-70bc-461c-8b7e-e269f9287da4-lock\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.952912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea516dc6-70bc-461c-8b7e-e269f9287da4-cache\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.959040 4751 generic.go:334] "Generic (PLEG): container finished" podID="0881042a-db01-4801-9323-9e3250f62c4c" containerID="7f9ea6ead2ef3a8d11071fd15bbfdb07671dcbddaf000763c97f1e4984148eed" exitCode=0 Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.959100 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" 
event={"ID":"0881042a-db01-4801-9323-9e3250f62c4c","Type":"ContainerDied","Data":"7f9ea6ead2ef3a8d11071fd15bbfdb07671dcbddaf000763c97f1e4984148eed"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.959125 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" event={"ID":"0881042a-db01-4801-9323-9e3250f62c4c","Type":"ContainerStarted","Data":"90d29c89fb6c862794324c96fd8987a559e09488e590ae5346488f470605ef0e"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.964025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-v5k4s" event={"ID":"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1","Type":"ContainerStarted","Data":"fcb836af9869812c74a91dcca3a72d4816bfeeeeffbbd510efcc2fff7cc127d4"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.964070 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-v5k4s" event={"ID":"eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1","Type":"ContainerStarted","Data":"919b405fdf1dcfeff895b141b1738987cfbbdb0deb440e0c20e5629ac9007f9d"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.966704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" event={"ID":"8854c4a0-71d3-4af3-9470-eee896ccb80c","Type":"ContainerStarted","Data":"016a420501c8d57a324a1fe0cff7340dd60731d9f9417de653c37c79e8ad43a7"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.966737 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" event={"ID":"8854c4a0-71d3-4af3-9470-eee896ccb80c","Type":"ContainerStarted","Data":"e7ff296d83793088691e4081a60ae51d12d6ae2a20984e6183cb3d1b0cd81446"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.968781 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.968778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jfjzm" event={"ID":"cc691dd5-0edf-472e-a953-9de2478572fb","Type":"ContainerDied","Data":"1edf70deb971782a1aea8cfa03040bd0b49753f3ca18cfc2f83e00b1e800ab37"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.968924 4751 scope.go:117] "RemoveContainer" containerID="59775a7a6fcde1267428b34326e4307016e904f9e59dbb0c86fd2cea66cb50c3" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.972856 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtjfp\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-kube-api-access-qtjfp\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.972940 4751 generic.go:334] "Generic (PLEG): container finished" podID="8b17b368-7447-4382-9d82-9dc2e3244009" containerID="23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117" exitCode=0 Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.973059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" event={"ID":"8b17b368-7447-4382-9d82-9dc2e3244009","Type":"ContainerDied","Data":"23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.973103 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" event={"ID":"8b17b368-7447-4382-9d82-9dc2e3244009","Type":"ContainerStarted","Data":"0ff3117ff1a609885c975b315e5c429b4129510b7d7d464f5355d1cdaf69da4e"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.977918 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.979008 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" event={"ID":"0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8","Type":"ContainerDied","Data":"3d50b1a3628941e4220e158774a43ef21759c95be84c7e53e1f98114651d0828"} Nov 23 04:10:26 crc kubenswrapper[4751]: I1123 04:10:26.979309 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wwcpd" Nov 23 04:10:27 crc kubenswrapper[4751]: W1123 04:10:27.069327 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95cf38ba_5edd_4ff7_9213_966b6498df4e.slice/crio-622723b19c7e01c86000569944a6881ae05a0274417caab2f4e7c8a2218144ef WatchSource:0}: Error finding container 622723b19c7e01c86000569944a6881ae05a0274417caab2f4e7c8a2218144ef: Status 404 returned error can't find the container with id 622723b19c7e01c86000569944a6881ae05a0274417caab2f4e7c8a2218144ef Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.080125 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-v5k4s" podStartSLOduration=2.080087404 podStartE2EDuration="2.080087404s" podCreationTimestamp="2025-11-23 04:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:10:27.052680455 +0000 UTC m=+923.246351814" watchObservedRunningTime="2025-11-23 04:10:27.080087404 +0000 UTC m=+923.273758763" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.080917 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.106059 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jfjzm"] Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.110764 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jfjzm"] Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.161607 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wwcpd"] Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.187220 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wwcpd"] Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.297730 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.358176 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-ovsdbserver-sb\") pod \"0881042a-db01-4801-9323-9e3250f62c4c\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.358565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-config\") pod \"0881042a-db01-4801-9323-9e3250f62c4c\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.358979 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brjt6\" (UniqueName: \"kubernetes.io/projected/0881042a-db01-4801-9323-9e3250f62c4c-kube-api-access-brjt6\") pod \"0881042a-db01-4801-9323-9e3250f62c4c\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.359170 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-dns-svc\") pod \"0881042a-db01-4801-9323-9e3250f62c4c\" (UID: \"0881042a-db01-4801-9323-9e3250f62c4c\") " Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.369472 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0881042a-db01-4801-9323-9e3250f62c4c-kube-api-access-brjt6" (OuterVolumeSpecName: "kube-api-access-brjt6") pod "0881042a-db01-4801-9323-9e3250f62c4c" (UID: "0881042a-db01-4801-9323-9e3250f62c4c"). InnerVolumeSpecName "kube-api-access-brjt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.376320 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-config" (OuterVolumeSpecName: "config") pod "0881042a-db01-4801-9323-9e3250f62c4c" (UID: "0881042a-db01-4801-9323-9e3250f62c4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.377269 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0881042a-db01-4801-9323-9e3250f62c4c" (UID: "0881042a-db01-4801-9323-9e3250f62c4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.383661 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0881042a-db01-4801-9323-9e3250f62c4c" (UID: "0881042a-db01-4801-9323-9e3250f62c4c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:27 crc kubenswrapper[4751]: E1123 04:10:27.461401 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 04:10:27 crc kubenswrapper[4751]: E1123 04:10:27.461458 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 04:10:27 crc kubenswrapper[4751]: E1123 04:10:27.461535 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift podName:ea516dc6-70bc-461c-8b7e-e269f9287da4 nodeName:}" failed. No retries permitted until 2025-11-23 04:10:28.461494654 +0000 UTC m=+924.655166013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift") pod "swift-storage-0" (UID: "ea516dc6-70bc-461c-8b7e-e269f9287da4") : configmap "swift-ring-files" not found Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.461405 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.461762 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.461777 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brjt6\" (UniqueName: \"kubernetes.io/projected/0881042a-db01-4801-9323-9e3250f62c4c-kube-api-access-brjt6\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.461790 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.461799 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0881042a-db01-4801-9323-9e3250f62c4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:27 crc kubenswrapper[4751]: E1123 04:10:27.956737 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:44278->38.102.83.50:34905: write tcp 38.102.83.50:44278->38.102.83.50:34905: write: broken pipe Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.992613 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" event={"ID":"0881042a-db01-4801-9323-9e3250f62c4c","Type":"ContainerDied","Data":"90d29c89fb6c862794324c96fd8987a559e09488e590ae5346488f470605ef0e"} Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.992677 4751 scope.go:117] "RemoveContainer" containerID="7f9ea6ead2ef3a8d11071fd15bbfdb07671dcbddaf000763c97f1e4984148eed" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.992787 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-jkg6p" Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.995866 4751 generic.go:334] "Generic (PLEG): container finished" podID="8854c4a0-71d3-4af3-9470-eee896ccb80c" containerID="016a420501c8d57a324a1fe0cff7340dd60731d9f9417de653c37c79e8ad43a7" exitCode=0 Nov 23 04:10:27 crc kubenswrapper[4751]: I1123 04:10:27.995948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" event={"ID":"8854c4a0-71d3-4af3-9470-eee896ccb80c","Type":"ContainerDied","Data":"016a420501c8d57a324a1fe0cff7340dd60731d9f9417de653c37c79e8ad43a7"} Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.008188 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" event={"ID":"8b17b368-7447-4382-9d82-9dc2e3244009","Type":"ContainerStarted","Data":"7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1"} Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.009356 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.017696 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"95cf38ba-5edd-4ff7-9213-966b6498df4e","Type":"ContainerStarted","Data":"622723b19c7e01c86000569944a6881ae05a0274417caab2f4e7c8a2218144ef"} Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.076923 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" podStartSLOduration=3.076906211 podStartE2EDuration="3.076906211s" podCreationTimestamp="2025-11-23 04:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:10:28.051628548 +0000 UTC m=+924.245299907" watchObservedRunningTime="2025-11-23 04:10:28.076906211 +0000 UTC m=+924.270577570" Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.105413 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-jkg6p"] Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.105642 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-jkg6p"] Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.482106 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:28 crc kubenswrapper[4751]: E1123 04:10:28.482319 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 04:10:28 crc kubenswrapper[4751]: E1123 04:10:28.482362 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 04:10:28 crc kubenswrapper[4751]: E1123 04:10:28.482419 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift podName:ea516dc6-70bc-461c-8b7e-e269f9287da4 nodeName:}" failed. No retries permitted until 2025-11-23 04:10:30.482400693 +0000 UTC m=+926.676072052 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift") pod "swift-storage-0" (UID: "ea516dc6-70bc-461c-8b7e-e269f9287da4") : configmap "swift-ring-files" not found Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.664959 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0881042a-db01-4801-9323-9e3250f62c4c" path="/var/lib/kubelet/pods/0881042a-db01-4801-9323-9e3250f62c4c/volumes" Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.667042 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8" path="/var/lib/kubelet/pods/0b99bd6e-67a2-46ea-b4d8-c5fc4df83bb8/volumes" Nov 23 04:10:28 crc kubenswrapper[4751]: I1123 04:10:28.667852 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc691dd5-0edf-472e-a953-9de2478572fb" path="/var/lib/kubelet/pods/cc691dd5-0edf-472e-a953-9de2478572fb/volumes" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.523219 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:30 crc kubenswrapper[4751]: E1123 04:10:30.523515 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 04:10:30 crc kubenswrapper[4751]: E1123 04:10:30.523947 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 04:10:30 crc kubenswrapper[4751]: E1123 04:10:30.524031 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift podName:ea516dc6-70bc-461c-8b7e-e269f9287da4 nodeName:}" failed. No retries permitted until 2025-11-23 04:10:34.524004073 +0000 UTC m=+930.717675462 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift") pod "swift-storage-0" (UID: "ea516dc6-70bc-461c-8b7e-e269f9287da4") : configmap "swift-ring-files" not found Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.720273 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-d526x"] Nov 23 04:10:30 crc kubenswrapper[4751]: E1123 04:10:30.720834 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0881042a-db01-4801-9323-9e3250f62c4c" containerName="init" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.720867 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0881042a-db01-4801-9323-9e3250f62c4c" containerName="init" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.721153 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0881042a-db01-4801-9323-9e3250f62c4c" containerName="init" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.721952 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.725874 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.726022 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.726067 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.734624 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-d526x"] Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.837909 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cc27467d-f028-4378-8e74-84b22dbc0048-etc-swift\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.837951 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-ring-data-devices\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.838010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-combined-ca-bundle\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.838029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-dispersionconf\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.838070 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-scripts\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.838086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb5bh\" (UniqueName: \"kubernetes.io/projected/cc27467d-f028-4378-8e74-84b22dbc0048-kube-api-access-jb5bh\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.838106 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-swiftconf\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 
04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.939871 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cc27467d-f028-4378-8e74-84b22dbc0048-etc-swift\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.939919 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-ring-data-devices\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.939978 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-combined-ca-bundle\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.940010 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-dispersionconf\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.940225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-scripts\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.940254 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb5bh\" (UniqueName: \"kubernetes.io/projected/cc27467d-f028-4378-8e74-84b22dbc0048-kube-api-access-jb5bh\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.940275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-swiftconf\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.940546 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cc27467d-f028-4378-8e74-84b22dbc0048-etc-swift\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.940945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-ring-data-devices\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.941825 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-scripts\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.946102 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-dispersionconf\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.946123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-swiftconf\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.946504 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-combined-ca-bundle\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:30 crc kubenswrapper[4751]: I1123 04:10:30.954643 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb5bh\" (UniqueName: \"kubernetes.io/projected/cc27467d-f028-4378-8e74-84b22dbc0048-kube-api-access-jb5bh\") pod \"swift-ring-rebalance-d526x\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:31 crc kubenswrapper[4751]: I1123 04:10:31.046418 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:31 crc kubenswrapper[4751]: I1123 04:10:31.501718 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-d526x"] Nov 23 04:10:31 crc kubenswrapper[4751]: W1123 04:10:31.510574 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc27467d_f028_4378_8e74_84b22dbc0048.slice/crio-bcd01ba8e6b61fa2dd2a4e961b68f7e2f51cb95110444d171f01efb35ed1f6ac WatchSource:0}: Error finding container bcd01ba8e6b61fa2dd2a4e961b68f7e2f51cb95110444d171f01efb35ed1f6ac: Status 404 returned error can't find the container with id bcd01ba8e6b61fa2dd2a4e961b68f7e2f51cb95110444d171f01efb35ed1f6ac Nov 23 04:10:32 crc kubenswrapper[4751]: I1123 04:10:32.057754 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-d526x" event={"ID":"cc27467d-f028-4378-8e74-84b22dbc0048","Type":"ContainerStarted","Data":"bcd01ba8e6b61fa2dd2a4e961b68f7e2f51cb95110444d171f01efb35ed1f6ac"} Nov 23 04:10:32 crc kubenswrapper[4751]: I1123 04:10:32.307080 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 23 04:10:32 crc kubenswrapper[4751]: I1123 04:10:32.307136 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 23 04:10:32 crc kubenswrapper[4751]: I1123 04:10:32.392588 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.192578 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.508676 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-vhzdn"] Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.510065 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.513448 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.513728 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.519616 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-61c3-account-create-x5dz4"] Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.520658 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.522604 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.539488 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vhzdn"] Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.542775 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-61c3-account-create-x5dz4"] Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.593218 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0834e6ad-8a18-427f-a4d6-94bfed0574bf-operator-scripts\") pod \"keystone-db-create-vhzdn\" (UID: \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\") " pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.695484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0834e6ad-8a18-427f-a4d6-94bfed0574bf-operator-scripts\") pod \"keystone-db-create-vhzdn\" (UID: \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\") " pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.695626 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msnk\" (UniqueName: \"kubernetes.io/projected/0834e6ad-8a18-427f-a4d6-94bfed0574bf-kube-api-access-2msnk\") pod \"keystone-db-create-vhzdn\" (UID: \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\") " pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.695668 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxsb\" (UniqueName: \"kubernetes.io/projected/d3520f7c-fe55-4916-a621-82c88870e84f-kube-api-access-4zxsb\") pod \"keystone-61c3-account-create-x5dz4\" (UID: \"d3520f7c-fe55-4916-a621-82c88870e84f\") " pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.695755 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3520f7c-fe55-4916-a621-82c88870e84f-operator-scripts\") pod \"keystone-61c3-account-create-x5dz4\" (UID: \"d3520f7c-fe55-4916-a621-82c88870e84f\") " pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.698395 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0834e6ad-8a18-427f-a4d6-94bfed0574bf-operator-scripts\") pod \"keystone-db-create-vhzdn\" (UID: \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\") " pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.710828 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vntp5"] Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.711877 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vntp5" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.720812 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vntp5"] Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.797102 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msnk\" (UniqueName: \"kubernetes.io/projected/0834e6ad-8a18-427f-a4d6-94bfed0574bf-kube-api-access-2msnk\") pod \"keystone-db-create-vhzdn\" (UID: \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\") " pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.797176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxsb\" (UniqueName: \"kubernetes.io/projected/d3520f7c-fe55-4916-a621-82c88870e84f-kube-api-access-4zxsb\") pod \"keystone-61c3-account-create-x5dz4\" (UID: \"d3520f7c-fe55-4916-a621-82c88870e84f\") " pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.797258 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3520f7c-fe55-4916-a621-82c88870e84f-operator-scripts\") pod \"keystone-61c3-account-create-x5dz4\" (UID: \"d3520f7c-fe55-4916-a621-82c88870e84f\") " pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.798451 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3520f7c-fe55-4916-a621-82c88870e84f-operator-scripts\") pod \"keystone-61c3-account-create-x5dz4\" (UID: \"d3520f7c-fe55-4916-a621-82c88870e84f\") " pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.805516 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-190a-account-create-nnc9s"] Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.806959 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.809519 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.809521 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-190a-account-create-nnc9s"] Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.825192 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxsb\" (UniqueName: \"kubernetes.io/projected/d3520f7c-fe55-4916-a621-82c88870e84f-kube-api-access-4zxsb\") pod \"keystone-61c3-account-create-x5dz4\" (UID: \"d3520f7c-fe55-4916-a621-82c88870e84f\") " pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.836825 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msnk\" (UniqueName: \"kubernetes.io/projected/0834e6ad-8a18-427f-a4d6-94bfed0574bf-kube-api-access-2msnk\") pod \"keystone-db-create-vhzdn\" (UID: \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\") " pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.845487 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.859720 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.900660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce2939ea-7439-4036-8964-12f56d55b9e3-operator-scripts\") pod \"placement-db-create-vntp5\" (UID: \"ce2939ea-7439-4036-8964-12f56d55b9e3\") " pod="openstack/placement-db-create-vntp5" Nov 23 04:10:33 crc kubenswrapper[4751]: I1123 04:10:33.900814 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6s8\" (UniqueName: \"kubernetes.io/projected/ce2939ea-7439-4036-8964-12f56d55b9e3-kube-api-access-5d6s8\") pod \"placement-db-create-vntp5\" (UID: \"ce2939ea-7439-4036-8964-12f56d55b9e3\") " pod="openstack/placement-db-create-vntp5" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.001950 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpkkb\" (UniqueName: \"kubernetes.io/projected/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-kube-api-access-kpkkb\") pod \"placement-190a-account-create-nnc9s\" (UID: \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\") " pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.002338 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d6s8\" (UniqueName: \"kubernetes.io/projected/ce2939ea-7439-4036-8964-12f56d55b9e3-kube-api-access-5d6s8\") pod \"placement-db-create-vntp5\" (UID: \"ce2939ea-7439-4036-8964-12f56d55b9e3\") " pod="openstack/placement-db-create-vntp5" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.002399 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-operator-scripts\") pod \"placement-190a-account-create-nnc9s\" (UID: \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\") " pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.002497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce2939ea-7439-4036-8964-12f56d55b9e3-operator-scripts\") pod \"placement-db-create-vntp5\" (UID: \"ce2939ea-7439-4036-8964-12f56d55b9e3\") " pod="openstack/placement-db-create-vntp5" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.003917 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce2939ea-7439-4036-8964-12f56d55b9e3-operator-scripts\") pod \"placement-db-create-vntp5\" (UID: \"ce2939ea-7439-4036-8964-12f56d55b9e3\") " pod="openstack/placement-db-create-vntp5" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.023925 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pfqxb"] Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.025228 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.033221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d6s8\" (UniqueName: \"kubernetes.io/projected/ce2939ea-7439-4036-8964-12f56d55b9e3-kube-api-access-5d6s8\") pod \"placement-db-create-vntp5\" (UID: \"ce2939ea-7439-4036-8964-12f56d55b9e3\") " pod="openstack/placement-db-create-vntp5" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.033587 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vntp5" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.047219 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pfqxb"] Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.107295 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-operator-scripts\") pod \"placement-190a-account-create-nnc9s\" (UID: \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\") " pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.107447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpkkb\" (UniqueName: \"kubernetes.io/projected/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-kube-api-access-kpkkb\") pod \"placement-190a-account-create-nnc9s\" (UID: \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\") " pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.108522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-operator-scripts\") pod \"placement-190a-account-create-nnc9s\" (UID: \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\") " pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.125685 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpkkb\" (UniqueName: \"kubernetes.io/projected/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-kube-api-access-kpkkb\") pod \"placement-190a-account-create-nnc9s\" (UID: \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\") " pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.129437 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b477-account-create-jzg52"] Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.130437 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.132886 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.143638 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b477-account-create-jzg52"] Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.211082 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989qr\" (UniqueName: \"kubernetes.io/projected/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-kube-api-access-989qr\") pod \"glance-db-create-pfqxb\" (UID: \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\") " pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.211250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-operator-scripts\") pod \"glance-db-create-pfqxb\" (UID: \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\") " pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.312947 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bda3488-1b2c-4014-8eac-2abd8308af72-operator-scripts\") pod \"glance-b477-account-create-jzg52\" (UID: \"4bda3488-1b2c-4014-8eac-2abd8308af72\") " pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.313055 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-989qr\" (UniqueName: \"kubernetes.io/projected/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-kube-api-access-989qr\") pod \"glance-db-create-pfqxb\" (UID: \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\") " pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.313162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-operator-scripts\") pod \"glance-db-create-pfqxb\" (UID: \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\") " pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.313198 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4w67\" (UniqueName: \"kubernetes.io/projected/4bda3488-1b2c-4014-8eac-2abd8308af72-kube-api-access-j4w67\") pod \"glance-b477-account-create-jzg52\" (UID: \"4bda3488-1b2c-4014-8eac-2abd8308af72\") " pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.316048 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-operator-scripts\") pod \"glance-db-create-pfqxb\" (UID: \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\") " pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.322472 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-61c3-account-create-x5dz4"] Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.332127 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vhzdn"] Nov 23 04:10:34 
crc kubenswrapper[4751]: I1123 04:10:34.335483 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-989qr\" (UniqueName: \"kubernetes.io/projected/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-kube-api-access-989qr\") pod \"glance-db-create-pfqxb\" (UID: \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\") " pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.414422 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4w67\" (UniqueName: \"kubernetes.io/projected/4bda3488-1b2c-4014-8eac-2abd8308af72-kube-api-access-j4w67\") pod \"glance-b477-account-create-jzg52\" (UID: \"4bda3488-1b2c-4014-8eac-2abd8308af72\") " pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.414495 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bda3488-1b2c-4014-8eac-2abd8308af72-operator-scripts\") pod \"glance-b477-account-create-jzg52\" (UID: \"4bda3488-1b2c-4014-8eac-2abd8308af72\") " pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.415572 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bda3488-1b2c-4014-8eac-2abd8308af72-operator-scripts\") pod \"glance-b477-account-create-jzg52\" (UID: \"4bda3488-1b2c-4014-8eac-2abd8308af72\") " pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.423850 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.425743 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.434733 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4w67\" (UniqueName: \"kubernetes.io/projected/4bda3488-1b2c-4014-8eac-2abd8308af72-kube-api-access-j4w67\") pod \"glance-b477-account-create-jzg52\" (UID: \"4bda3488-1b2c-4014-8eac-2abd8308af72\") " pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.451284 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.481254 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vntp5"] Nov 23 04:10:34 crc kubenswrapper[4751]: W1123 04:10:34.487236 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce2939ea_7439_4036_8964_12f56d55b9e3.slice/crio-804de8be671a8ac4b4fa289e6a08cdd5a482a69a060b7bbb392c8bd668720ba2 WatchSource:0}: Error finding container 804de8be671a8ac4b4fa289e6a08cdd5a482a69a060b7bbb392c8bd668720ba2: Status 404 returned error can't find the container with id 804de8be671a8ac4b4fa289e6a08cdd5a482a69a060b7bbb392c8bd668720ba2 Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.618265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:34 crc kubenswrapper[4751]: E1123 04:10:34.618677 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 04:10:34 crc kubenswrapper[4751]: E1123 04:10:34.618696 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 04:10:34 crc kubenswrapper[4751]: E1123 04:10:34.618748 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift podName:ea516dc6-70bc-461c-8b7e-e269f9287da4 nodeName:}" failed. No retries permitted until 2025-11-23 04:10:42.618729977 +0000 UTC m=+938.812401336 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift") pod "swift-storage-0" (UID: "ea516dc6-70bc-461c-8b7e-e269f9287da4") : configmap "swift-ring-files" not found Nov 23 04:10:34 crc kubenswrapper[4751]: W1123 04:10:34.905176 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda96958e1_1100_47e1_a5e9_cf21ef25e4cb.slice/crio-e7470eb73000989e28d2db904426b2bd713e8634b63d754314dae88641d044e3 WatchSource:0}: Error finding container e7470eb73000989e28d2db904426b2bd713e8634b63d754314dae88641d044e3: Status 404 returned error can't find the container with id e7470eb73000989e28d2db904426b2bd713e8634b63d754314dae88641d044e3 Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.912890 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-190a-account-create-nnc9s"] Nov 23 04:10:34 crc kubenswrapper[4751]: W1123 04:10:34.921576 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f950d97_cf8c_49c5_88a9_5ec34b3a71f2.slice/crio-91bc0dabc6da9b29217c2a6b306b0d201b0e1551bb5455819197e1950e2e291e WatchSource:0}: Error finding container 91bc0dabc6da9b29217c2a6b306b0d201b0e1551bb5455819197e1950e2e291e: Status 404 returned error can't find the container with id 91bc0dabc6da9b29217c2a6b306b0d201b0e1551bb5455819197e1950e2e291e Nov 23 04:10:34 crc kubenswrapper[4751]: I1123 04:10:34.921995 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pfqxb"] Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.016660 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b477-account-create-jzg52"] Nov 23 04:10:35 crc kubenswrapper[4751]: W1123 04:10:35.028598 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bda3488_1b2c_4014_8eac_2abd8308af72.slice/crio-c02a0bed7a15e1a347a11c380dea8351765c70de968808ce45b677f66f1545ab WatchSource:0}: Error finding container c02a0bed7a15e1a347a11c380dea8351765c70de968808ce45b677f66f1545ab: Status 404 returned error can't find the container with id c02a0bed7a15e1a347a11c380dea8351765c70de968808ce45b677f66f1545ab Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.105498 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vhzdn" event={"ID":"0834e6ad-8a18-427f-a4d6-94bfed0574bf","Type":"ContainerStarted","Data":"7c9a71616f44c5f43c447511864f6aa37c2fd3491c77f40228366bae69d5474d"} Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.110392 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" event={"ID":"8854c4a0-71d3-4af3-9470-eee896ccb80c","Type":"ContainerStarted","Data":"8a429cc614e5605e1cbb68c8f4a5da44c508d07227240f7e2855ffb0a588d945"} Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.112730 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vntp5" event={"ID":"ce2939ea-7439-4036-8964-12f56d55b9e3","Type":"ContainerStarted","Data":"804de8be671a8ac4b4fa289e6a08cdd5a482a69a060b7bbb392c8bd668720ba2"} Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.113746 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pfqxb" 
event={"ID":"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2","Type":"ContainerStarted","Data":"91bc0dabc6da9b29217c2a6b306b0d201b0e1551bb5455819197e1950e2e291e"} Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.114678 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-61c3-account-create-x5dz4" event={"ID":"d3520f7c-fe55-4916-a621-82c88870e84f","Type":"ContainerStarted","Data":"0efa5e5bf261a2bd1e7ccc987089967088df175d7c948d476c5abc88966d3440"} Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.115579 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b477-account-create-jzg52" event={"ID":"4bda3488-1b2c-4014-8eac-2abd8308af72","Type":"ContainerStarted","Data":"c02a0bed7a15e1a347a11c380dea8351765c70de968808ce45b677f66f1545ab"} Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.116695 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-190a-account-create-nnc9s" event={"ID":"a96958e1-1100-47e1-a5e9-cf21ef25e4cb","Type":"ContainerStarted","Data":"e7470eb73000989e28d2db904426b2bd713e8634b63d754314dae88641d044e3"} Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.692831 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.796966 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 23 04:10:35 crc kubenswrapper[4751]: I1123 04:10:35.969510 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.124285 4751 generic.go:334] "Generic (PLEG): container finished" podID="d3520f7c-fe55-4916-a621-82c88870e84f" containerID="1e2e3bf3d1d24c14593e0480e840d7f1c6082486211b6d6f0b66539c7f09fcd6" exitCode=0 Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.124392 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-61c3-account-create-x5dz4" event={"ID":"d3520f7c-fe55-4916-a621-82c88870e84f","Type":"ContainerDied","Data":"1e2e3bf3d1d24c14593e0480e840d7f1c6082486211b6d6f0b66539c7f09fcd6"} Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.125434 4751 generic.go:334] "Generic (PLEG): container finished" podID="4bda3488-1b2c-4014-8eac-2abd8308af72" containerID="caed715fdb8987c75e6718f4327c56d7c6621ff0623eff8c1af1b32336fc825f" exitCode=0 Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.125494 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b477-account-create-jzg52" event={"ID":"4bda3488-1b2c-4014-8eac-2abd8308af72","Type":"ContainerDied","Data":"caed715fdb8987c75e6718f4327c56d7c6621ff0623eff8c1af1b32336fc825f"} Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.130272 4751 generic.go:334] "Generic (PLEG): container finished" podID="a96958e1-1100-47e1-a5e9-cf21ef25e4cb" containerID="2c8df6cc863eae0ce41d4eb9dcdf2a5cf4ab00ed8ca23385f8aa73f861310c87" exitCode=0 Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.130889 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-190a-account-create-nnc9s" event={"ID":"a96958e1-1100-47e1-a5e9-cf21ef25e4cb","Type":"ContainerDied","Data":"2c8df6cc863eae0ce41d4eb9dcdf2a5cf4ab00ed8ca23385f8aa73f861310c87"} Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.133363 4751 generic.go:334] "Generic (PLEG): container finished" podID="0834e6ad-8a18-427f-a4d6-94bfed0574bf" 
containerID="94ccf6054e5174c7992f340b04aae05ee6c3d883d6f1ecab4cecf4bad63d8d5c" exitCode=0 Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.133469 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vhzdn" event={"ID":"0834e6ad-8a18-427f-a4d6-94bfed0574bf","Type":"ContainerDied","Data":"94ccf6054e5174c7992f340b04aae05ee6c3d883d6f1ecab4cecf4bad63d8d5c"} Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.135707 4751 generic.go:334] "Generic (PLEG): container finished" podID="ce2939ea-7439-4036-8964-12f56d55b9e3" containerID="378a3d663fed2079099af1d536142d6a158cf16947e55feab5a0106c770fca45" exitCode=0 Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.135763 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vntp5" event={"ID":"ce2939ea-7439-4036-8964-12f56d55b9e3","Type":"ContainerDied","Data":"378a3d663fed2079099af1d536142d6a158cf16947e55feab5a0106c770fca45"} Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.137425 4751 generic.go:334] "Generic (PLEG): container finished" podID="7f950d97-cf8c-49c5-88a9-5ec34b3a71f2" containerID="997c76a04dce1895d439e37b162912bd81360b0769a1056028594f377ddebcc0" exitCode=0 Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.142290 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pfqxb" event={"ID":"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2","Type":"ContainerDied","Data":"997c76a04dce1895d439e37b162912bd81360b0769a1056028594f377ddebcc0"} Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.142372 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" Nov 23 04:10:36 crc kubenswrapper[4751]: I1123 04:10:36.215188 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" podStartSLOduration=11.215165116 podStartE2EDuration="11.215165116s" podCreationTimestamp="2025-11-23 04:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:10:36.204328712 +0000 UTC m=+932.398000081" watchObservedRunningTime="2025-11-23 04:10:36.215165116 +0000 UTC m=+932.408836475" Nov 23 04:10:38 crc kubenswrapper[4751]: I1123 04:10:38.114552 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:10:38 crc kubenswrapper[4751]: I1123 04:10:38.115241 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:10:38 crc kubenswrapper[4751]: I1123 04:10:38.115286 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:10:38 crc kubenswrapper[4751]: I1123 04:10:38.116211 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92b37deee194d835919b13e465ed8d01f88734ed8d61a3352e53915d15308b01"} 
pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:10:38 crc kubenswrapper[4751]: I1123 04:10:38.116281 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://92b37deee194d835919b13e465ed8d01f88734ed8d61a3352e53915d15308b01" gracePeriod=600 Nov 23 04:10:38 crc kubenswrapper[4751]: I1123 04:10:38.168861 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"95cf38ba-5edd-4ff7-9213-966b6498df4e","Type":"ContainerStarted","Data":"f8a22b650451be48b074bfdfd1f68ac7085743dc2a8413602410a045a66512b4"} Nov 23 04:10:38 crc kubenswrapper[4751]: I1123 04:10:38.973511 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.019296 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.059031 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.062766 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.071006 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vntp5" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.087841 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.106180 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zxsb\" (UniqueName: \"kubernetes.io/projected/d3520f7c-fe55-4916-a621-82c88870e84f-kube-api-access-4zxsb\") pod \"d3520f7c-fe55-4916-a621-82c88870e84f\" (UID: \"d3520f7c-fe55-4916-a621-82c88870e84f\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.106265 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3520f7c-fe55-4916-a621-82c88870e84f-operator-scripts\") pod \"d3520f7c-fe55-4916-a621-82c88870e84f\" (UID: \"d3520f7c-fe55-4916-a621-82c88870e84f\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.106465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-operator-scripts\") pod \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\" (UID: \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.106535 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-989qr\" (UniqueName: \"kubernetes.io/projected/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-kube-api-access-989qr\") pod \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\" (UID: \"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.106947 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3520f7c-fe55-4916-a621-82c88870e84f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3520f7c-fe55-4916-a621-82c88870e84f" (UID: "d3520f7c-fe55-4916-a621-82c88870e84f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.107232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f950d97-cf8c-49c5-88a9-5ec34b3a71f2" (UID: "7f950d97-cf8c-49c5-88a9-5ec34b3a71f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.116865 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3520f7c-fe55-4916-a621-82c88870e84f-kube-api-access-4zxsb" (OuterVolumeSpecName: "kube-api-access-4zxsb") pod "d3520f7c-fe55-4916-a621-82c88870e84f" (UID: "d3520f7c-fe55-4916-a621-82c88870e84f"). InnerVolumeSpecName "kube-api-access-4zxsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.119907 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-kube-api-access-989qr" (OuterVolumeSpecName: "kube-api-access-989qr") pod "7f950d97-cf8c-49c5-88a9-5ec34b3a71f2" (UID: "7f950d97-cf8c-49c5-88a9-5ec34b3a71f2"). InnerVolumeSpecName "kube-api-access-989qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.179588 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vntp5" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.179592 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vntp5" event={"ID":"ce2939ea-7439-4036-8964-12f56d55b9e3","Type":"ContainerDied","Data":"804de8be671a8ac4b4fa289e6a08cdd5a482a69a060b7bbb392c8bd668720ba2"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.179643 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="804de8be671a8ac4b4fa289e6a08cdd5a482a69a060b7bbb392c8bd668720ba2" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.181114 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pfqxb" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.181118 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pfqxb" event={"ID":"7f950d97-cf8c-49c5-88a9-5ec34b3a71f2","Type":"ContainerDied","Data":"91bc0dabc6da9b29217c2a6b306b0d201b0e1551bb5455819197e1950e2e291e"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.181230 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91bc0dabc6da9b29217c2a6b306b0d201b0e1551bb5455819197e1950e2e291e" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.182808 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-61c3-account-create-x5dz4" event={"ID":"d3520f7c-fe55-4916-a621-82c88870e84f","Type":"ContainerDied","Data":"0efa5e5bf261a2bd1e7ccc987089967088df175d7c948d476c5abc88966d3440"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.182829 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0efa5e5bf261a2bd1e7ccc987089967088df175d7c948d476c5abc88966d3440" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.182832 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-61c3-account-create-x5dz4" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.184792 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b477-account-create-jzg52" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.185122 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b477-account-create-jzg52" event={"ID":"4bda3488-1b2c-4014-8eac-2abd8308af72","Type":"ContainerDied","Data":"c02a0bed7a15e1a347a11c380dea8351765c70de968808ce45b677f66f1545ab"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.185151 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c02a0bed7a15e1a347a11c380dea8351765c70de968808ce45b677f66f1545ab" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.189549 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="92b37deee194d835919b13e465ed8d01f88734ed8d61a3352e53915d15308b01" exitCode=0 Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.189626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"92b37deee194d835919b13e465ed8d01f88734ed8d61a3352e53915d15308b01"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.189649 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"c3c3326b403c9822bff7bedc4dca6772d05fadb01a9321aa74a5bdfd193f9c8e"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.189665 4751 scope.go:117] "RemoveContainer" containerID="b4237ad3d8d19c6b3e554a7d8760278ed9d3d36fb9422fb2c3e4180d1664e464" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.193335 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vhzdn" event={"ID":"0834e6ad-8a18-427f-a4d6-94bfed0574bf","Type":"ContainerDied","Data":"7c9a71616f44c5f43c447511864f6aa37c2fd3491c77f40228366bae69d5474d"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.193383 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c9a71616f44c5f43c447511864f6aa37c2fd3491c77f40228366bae69d5474d" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.193437 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vhzdn" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.195992 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"95cf38ba-5edd-4ff7-9213-966b6498df4e","Type":"ContainerStarted","Data":"3919f1ce49ec0e8c415364cc3c48f193b69b872206b22b209ba9ecf2fe151ebd"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.196400 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.197924 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-d526x" event={"ID":"cc27467d-f028-4378-8e74-84b22dbc0048","Type":"ContainerStarted","Data":"081aff488910a6da10ac683aab506ffaa82171d87bb03cb24b853f7d51b7f100"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.200479 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-190a-account-create-nnc9s" event={"ID":"a96958e1-1100-47e1-a5e9-cf21ef25e4cb","Type":"ContainerDied","Data":"e7470eb73000989e28d2db904426b2bd713e8634b63d754314dae88641d044e3"} Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.200507 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-190a-account-create-nnc9s" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.200515 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7470eb73000989e28d2db904426b2bd713e8634b63d754314dae88641d044e3" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.207959 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce2939ea-7439-4036-8964-12f56d55b9e3-operator-scripts\") pod \"ce2939ea-7439-4036-8964-12f56d55b9e3\" (UID: \"ce2939ea-7439-4036-8964-12f56d55b9e3\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208020 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0834e6ad-8a18-427f-a4d6-94bfed0574bf-operator-scripts\") pod \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\" (UID: \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpkkb\" (UniqueName: \"kubernetes.io/projected/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-kube-api-access-kpkkb\") pod \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\" (UID: \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208126 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d6s8\" (UniqueName: \"kubernetes.io/projected/ce2939ea-7439-4036-8964-12f56d55b9e3-kube-api-access-5d6s8\") pod \"ce2939ea-7439-4036-8964-12f56d55b9e3\" (UID: \"ce2939ea-7439-4036-8964-12f56d55b9e3\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208149 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-operator-scripts\") pod \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\" (UID: \"a96958e1-1100-47e1-a5e9-cf21ef25e4cb\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208237 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/4bda3488-1b2c-4014-8eac-2abd8308af72-operator-scripts\") pod \"4bda3488-1b2c-4014-8eac-2abd8308af72\" (UID: \"4bda3488-1b2c-4014-8eac-2abd8308af72\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208276 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2msnk\" (UniqueName: \"kubernetes.io/projected/0834e6ad-8a18-427f-a4d6-94bfed0574bf-kube-api-access-2msnk\") pod \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\" (UID: \"0834e6ad-8a18-427f-a4d6-94bfed0574bf\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208335 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4w67\" (UniqueName: \"kubernetes.io/projected/4bda3488-1b2c-4014-8eac-2abd8308af72-kube-api-access-j4w67\") pod \"4bda3488-1b2c-4014-8eac-2abd8308af72\" (UID: \"4bda3488-1b2c-4014-8eac-2abd8308af72\") " Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208943 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zxsb\" (UniqueName: \"kubernetes.io/projected/d3520f7c-fe55-4916-a621-82c88870e84f-kube-api-access-4zxsb\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208971 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3520f7c-fe55-4916-a621-82c88870e84f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.208984 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.209021 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-989qr\" (UniqueName: \"kubernetes.io/projected/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2-kube-api-access-989qr\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.210176 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a96958e1-1100-47e1-a5e9-cf21ef25e4cb" (UID: "a96958e1-1100-47e1-a5e9-cf21ef25e4cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.210545 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0834e6ad-8a18-427f-a4d6-94bfed0574bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0834e6ad-8a18-427f-a4d6-94bfed0574bf" (UID: "0834e6ad-8a18-427f-a4d6-94bfed0574bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.210585 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bda3488-1b2c-4014-8eac-2abd8308af72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4bda3488-1b2c-4014-8eac-2abd8308af72" (UID: "4bda3488-1b2c-4014-8eac-2abd8308af72"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.210790 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce2939ea-7439-4036-8964-12f56d55b9e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce2939ea-7439-4036-8964-12f56d55b9e3" (UID: "ce2939ea-7439-4036-8964-12f56d55b9e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.213401 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-kube-api-access-kpkkb" (OuterVolumeSpecName: "kube-api-access-kpkkb") pod "a96958e1-1100-47e1-a5e9-cf21ef25e4cb" (UID: "a96958e1-1100-47e1-a5e9-cf21ef25e4cb"). InnerVolumeSpecName "kube-api-access-kpkkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.213508 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0834e6ad-8a18-427f-a4d6-94bfed0574bf-kube-api-access-2msnk" (OuterVolumeSpecName: "kube-api-access-2msnk") pod "0834e6ad-8a18-427f-a4d6-94bfed0574bf" (UID: "0834e6ad-8a18-427f-a4d6-94bfed0574bf"). InnerVolumeSpecName "kube-api-access-2msnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.214246 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bda3488-1b2c-4014-8eac-2abd8308af72-kube-api-access-j4w67" (OuterVolumeSpecName: "kube-api-access-j4w67") pod "4bda3488-1b2c-4014-8eac-2abd8308af72" (UID: "4bda3488-1b2c-4014-8eac-2abd8308af72"). InnerVolumeSpecName "kube-api-access-j4w67". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.227275 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-d526x" podStartSLOduration=1.8979636690000001 podStartE2EDuration="9.227250043s" podCreationTimestamp="2025-11-23 04:10:30 +0000 UTC" firstStartedPulling="2025-11-23 04:10:31.513683483 +0000 UTC m=+927.707354872" lastFinishedPulling="2025-11-23 04:10:38.842969877 +0000 UTC m=+935.036641246" observedRunningTime="2025-11-23 04:10:39.222418026 +0000 UTC m=+935.416089395" watchObservedRunningTime="2025-11-23 04:10:39.227250043 +0000 UTC m=+935.420921412" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.227931 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce2939ea-7439-4036-8964-12f56d55b9e3-kube-api-access-5d6s8" (OuterVolumeSpecName: "kube-api-access-5d6s8") pod "ce2939ea-7439-4036-8964-12f56d55b9e3" (UID: "ce2939ea-7439-4036-8964-12f56d55b9e3"). InnerVolumeSpecName "kube-api-access-5d6s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.248762 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.8534948719999997 podStartE2EDuration="13.248741106s" podCreationTimestamp="2025-11-23 04:10:26 +0000 UTC" firstStartedPulling="2025-11-23 04:10:27.07116357 +0000 UTC m=+923.264834929" lastFinishedPulling="2025-11-23 04:10:36.466409804 +0000 UTC m=+932.660081163" observedRunningTime="2025-11-23 04:10:39.23898002 +0000 UTC m=+935.432651379" watchObservedRunningTime="2025-11-23 04:10:39.248741106 +0000 UTC m=+935.442412465" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.310789 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bda3488-1b2c-4014-8eac-2abd8308af72-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.310828 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2msnk\" (UniqueName: \"kubernetes.io/projected/0834e6ad-8a18-427f-a4d6-94bfed0574bf-kube-api-access-2msnk\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.310842 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4w67\" (UniqueName: \"kubernetes.io/projected/4bda3488-1b2c-4014-8eac-2abd8308af72-kube-api-access-j4w67\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.310854 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce2939ea-7439-4036-8964-12f56d55b9e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.310868 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0834e6ad-8a18-427f-a4d6-94bfed0574bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.310880 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpkkb\" (UniqueName: \"kubernetes.io/projected/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-kube-api-access-kpkkb\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.310892 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d6s8\" (UniqueName: \"kubernetes.io/projected/ce2939ea-7439-4036-8964-12f56d55b9e3-kube-api-access-5d6s8\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:39 crc kubenswrapper[4751]: I1123 04:10:39.310905 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a96958e1-1100-47e1-a5e9-cf21ef25e4cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.216501 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.293734 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-486m4"] Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.294101 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" podUID="8b17b368-7447-4382-9d82-9dc2e3244009" containerName="dnsmasq-dns" 
containerID="cri-o://7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1" gracePeriod=10 Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.707417 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.869201 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-config\") pod \"8b17b368-7447-4382-9d82-9dc2e3244009\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.870037 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-dns-svc\") pod \"8b17b368-7447-4382-9d82-9dc2e3244009\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.870112 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbpfj\" (UniqueName: \"kubernetes.io/projected/8b17b368-7447-4382-9d82-9dc2e3244009-kube-api-access-zbpfj\") pod \"8b17b368-7447-4382-9d82-9dc2e3244009\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.870138 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-ovsdbserver-sb\") pod \"8b17b368-7447-4382-9d82-9dc2e3244009\" (UID: \"8b17b368-7447-4382-9d82-9dc2e3244009\") " Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.876327 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b17b368-7447-4382-9d82-9dc2e3244009-kube-api-access-zbpfj" (OuterVolumeSpecName: "kube-api-access-zbpfj") pod "8b17b368-7447-4382-9d82-9dc2e3244009" (UID: "8b17b368-7447-4382-9d82-9dc2e3244009"). InnerVolumeSpecName "kube-api-access-zbpfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.915292 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-config" (OuterVolumeSpecName: "config") pod "8b17b368-7447-4382-9d82-9dc2e3244009" (UID: "8b17b368-7447-4382-9d82-9dc2e3244009"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.918069 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b17b368-7447-4382-9d82-9dc2e3244009" (UID: "8b17b368-7447-4382-9d82-9dc2e3244009"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.919023 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b17b368-7447-4382-9d82-9dc2e3244009" (UID: "8b17b368-7447-4382-9d82-9dc2e3244009"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.972646 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.972702 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbpfj\" (UniqueName: \"kubernetes.io/projected/8b17b368-7447-4382-9d82-9dc2e3244009-kube-api-access-zbpfj\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.972723 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:41 crc kubenswrapper[4751]: I1123 04:10:41.972742 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b17b368-7447-4382-9d82-9dc2e3244009-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.265150 4751 generic.go:334] "Generic (PLEG): container finished" podID="8b17b368-7447-4382-9d82-9dc2e3244009" containerID="7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1" exitCode=0 Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.265209 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" event={"ID":"8b17b368-7447-4382-9d82-9dc2e3244009","Type":"ContainerDied","Data":"7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1"} Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.265307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" event={"ID":"8b17b368-7447-4382-9d82-9dc2e3244009","Type":"ContainerDied","Data":"0ff3117ff1a609885c975b315e5c429b4129510b7d7d464f5355d1cdaf69da4e"} Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.265236 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-486m4" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.265340 4751 scope.go:117] "RemoveContainer" containerID="7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.302313 4751 scope.go:117] "RemoveContainer" containerID="23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.303858 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-486m4"] Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.310706 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-486m4"] Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.329396 4751 scope.go:117] "RemoveContainer" containerID="7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1" Nov 23 04:10:42 crc kubenswrapper[4751]: E1123 04:10:42.329917 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1\": container with ID starting with 7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1 not found: ID does not exist" containerID="7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.329975 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1"} err="failed to get container status \"7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1\": rpc error: code = NotFound desc = could not find container \"7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1\": container with ID starting with 7e07ed83b3a48a7ff4c126b1da35c012dab12ba10e711a1bd7e7e06a4ae12df1 not found: ID does not exist" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.330016 4751 scope.go:117] "RemoveContainer" containerID="23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117" Nov 23 04:10:42 crc kubenswrapper[4751]: E1123 04:10:42.334894 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117\": container with ID starting with 23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117 not found: ID does not exist" containerID="23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.334944 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117"} err="failed to get container status \"23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117\": rpc error: code = NotFound desc = could not find container \"23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117\": container with ID starting with 23b36d187a598849280e080bf3c99d0d7bc83970425eb05f459f144d7ca24117 not found: ID does not exist" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.655487 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b17b368-7447-4382-9d82-9dc2e3244009" path="/var/lib/kubelet/pods/8b17b368-7447-4382-9d82-9dc2e3244009/volumes" Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.685695 4751 
Nov 23 04:10:42 crc kubenswrapper[4751]: I1123 04:10:42.685695 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0"
Nov 23 04:10:42 crc kubenswrapper[4751]: E1123 04:10:42.685921 4751 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 23 04:10:42 crc kubenswrapper[4751]: E1123 04:10:42.685962 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 23 04:10:42 crc kubenswrapper[4751]: E1123 04:10:42.686020 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift podName:ea516dc6-70bc-461c-8b7e-e269f9287da4 nodeName:}" failed. No retries permitted until 2025-11-23 04:10:58.686003431 +0000 UTC m=+954.879674790 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift") pod "swift-storage-0" (UID: "ea516dc6-70bc-461c-8b7e-e269f9287da4") : configmap "swift-ring-files" not found
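etc-swift is a projected volume sourced from the swift-ring-files ConfigMap, which only appears once the swift-ring-rebalance job publishes it; the retry scheduled here for 04:10:58 succeeds further down. The 16s in durationBeforeRetry fits per-volume exponential backoff; a sketch of such a doubling schedule (the 2s base and 2m cap are illustrative assumptions, not values read from the log or from kubelet):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Each failed MountVolume.SetUp doubles the wait before the next
	// attempt, up to a cap; 16s above is one step on such a schedule.
	backoff := 2 * time.Second
	const maxBackoff = 2 * time.Minute
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```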
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.357491 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-sp5s2"]
Nov 23 04:10:44 crc kubenswrapper[4751]: E1123 04:10:44.358079 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f950d97-cf8c-49c5-88a9-5ec34b3a71f2" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358099 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f950d97-cf8c-49c5-88a9-5ec34b3a71f2" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: E1123 04:10:44.358114 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce2939ea-7439-4036-8964-12f56d55b9e3" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358120 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce2939ea-7439-4036-8964-12f56d55b9e3" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: E1123 04:10:44.358128 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3520f7c-fe55-4916-a621-82c88870e84f" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358134 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3520f7c-fe55-4916-a621-82c88870e84f" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: E1123 04:10:44.358148 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b17b368-7447-4382-9d82-9dc2e3244009" containerName="dnsmasq-dns"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358155 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b17b368-7447-4382-9d82-9dc2e3244009" containerName="dnsmasq-dns"
Nov 23 04:10:44 crc kubenswrapper[4751]: E1123 04:10:44.358175 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bda3488-1b2c-4014-8eac-2abd8308af72" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358180 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bda3488-1b2c-4014-8eac-2abd8308af72" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: E1123 04:10:44.358191 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0834e6ad-8a18-427f-a4d6-94bfed0574bf" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358201 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0834e6ad-8a18-427f-a4d6-94bfed0574bf" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: E1123 04:10:44.358211 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a96958e1-1100-47e1-a5e9-cf21ef25e4cb" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358217 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a96958e1-1100-47e1-a5e9-cf21ef25e4cb" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: E1123 04:10:44.358225 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b17b368-7447-4382-9d82-9dc2e3244009" containerName="init"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358231 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b17b368-7447-4382-9d82-9dc2e3244009" containerName="init"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358420 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bda3488-1b2c-4014-8eac-2abd8308af72" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358440 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3520f7c-fe55-4916-a621-82c88870e84f" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358449 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce2939ea-7439-4036-8964-12f56d55b9e3" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358466 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a96958e1-1100-47e1-a5e9-cf21ef25e4cb" containerName="mariadb-account-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358475 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f950d97-cf8c-49c5-88a9-5ec34b3a71f2" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358481 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b17b368-7447-4382-9d82-9dc2e3244009" containerName="dnsmasq-dns"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.358492 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0834e6ad-8a18-427f-a4d6-94bfed0574bf" containerName="mariadb-database-create"
Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.359019 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sp5s2"
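The burst of RemoveStaleState pairs above is housekeeping triggered by admitting a new pod: resource-manager state keyed by (podUID, containerName) is purged for pods that are no longer active, which is why every recently deleted job pod reappears here once. A simplified model of that sweep (the types and names are illustrative, not kubelet's):

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStale deletes state entries whose pod is no longer in the
// active set, mirroring the paired cpu_manager/state_mem lines above.
func removeStale(state map[key]string, active map[string]bool) {
	for k := range state {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"8b17b368-7447-4382-9d82-9dc2e3244009", "dnsmasq-dns"}:    "cpus 0-3",
		{"e010e2e6-3482-4511-8540-46aef4db130e", "glance-db-sync"}: "cpus 0-3",
	}
	active := map[string]bool{"e010e2e6-3482-4511-8540-46aef4db130e": true}
	removeStale(state, active)
	fmt.Println(len(state)) // 1: only the active pod's entry survives
}
```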
Need to start a new one" pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.362620 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.362938 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hp66g" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.369110 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sp5s2"] Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.516755 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-db-sync-config-data\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.516870 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-combined-ca-bundle\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.516928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj9hx\" (UniqueName: \"kubernetes.io/projected/e010e2e6-3482-4511-8540-46aef4db130e-kube-api-access-lj9hx\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.516963 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-config-data\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.618250 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-combined-ca-bundle\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.618322 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj9hx\" (UniqueName: \"kubernetes.io/projected/e010e2e6-3482-4511-8540-46aef4db130e-kube-api-access-lj9hx\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.618385 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-config-data\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.618485 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-db-sync-config-data\") pod 
\"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.629783 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-db-sync-config-data\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.630034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-config-data\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.631872 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-combined-ca-bundle\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.658141 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj9hx\" (UniqueName: \"kubernetes.io/projected/e010e2e6-3482-4511-8540-46aef4db130e-kube-api-access-lj9hx\") pod \"glance-db-sync-sp5s2\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:44 crc kubenswrapper[4751]: I1123 04:10:44.681597 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sp5s2" Nov 23 04:10:45 crc kubenswrapper[4751]: I1123 04:10:45.206014 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sp5s2"] Nov 23 04:10:45 crc kubenswrapper[4751]: W1123 04:10:45.206484 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode010e2e6_3482_4511_8540_46aef4db130e.slice/crio-996ed38fa33199676034b793f20c80f9adf75621ac97771ecd1d15fb18fcc70b WatchSource:0}: Error finding container 996ed38fa33199676034b793f20c80f9adf75621ac97771ecd1d15fb18fcc70b: Status 404 returned error can't find the container with id 996ed38fa33199676034b793f20c80f9adf75621ac97771ecd1d15fb18fcc70b Nov 23 04:10:45 crc kubenswrapper[4751]: I1123 04:10:45.294388 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sp5s2" event={"ID":"e010e2e6-3482-4511-8540-46aef4db130e","Type":"ContainerStarted","Data":"996ed38fa33199676034b793f20c80f9adf75621ac97771ecd1d15fb18fcc70b"} Nov 23 04:10:46 crc kubenswrapper[4751]: I1123 04:10:46.307508 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc27467d-f028-4378-8e74-84b22dbc0048" containerID="081aff488910a6da10ac683aab506ffaa82171d87bb03cb24b853f7d51b7f100" exitCode=0 Nov 23 04:10:46 crc kubenswrapper[4751]: I1123 04:10:46.307603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-d526x" event={"ID":"cc27467d-f028-4378-8e74-84b22dbc0048","Type":"ContainerDied","Data":"081aff488910a6da10ac683aab506ffaa82171d87bb03cb24b853f7d51b7f100"} Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.592677 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.673697 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-scripts\") pod \"cc27467d-f028-4378-8e74-84b22dbc0048\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.673771 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cc27467d-f028-4378-8e74-84b22dbc0048-etc-swift\") pod \"cc27467d-f028-4378-8e74-84b22dbc0048\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.673796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-ring-data-devices\") pod \"cc27467d-f028-4378-8e74-84b22dbc0048\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.673825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-dispersionconf\") pod \"cc27467d-f028-4378-8e74-84b22dbc0048\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.673842 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-combined-ca-bundle\") pod \"cc27467d-f028-4378-8e74-84b22dbc0048\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.673878 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-swiftconf\") pod \"cc27467d-f028-4378-8e74-84b22dbc0048\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.673930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb5bh\" (UniqueName: \"kubernetes.io/projected/cc27467d-f028-4378-8e74-84b22dbc0048-kube-api-access-jb5bh\") pod \"cc27467d-f028-4378-8e74-84b22dbc0048\" (UID: \"cc27467d-f028-4378-8e74-84b22dbc0048\") " Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.675904 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "cc27467d-f028-4378-8e74-84b22dbc0048" (UID: "cc27467d-f028-4378-8e74-84b22dbc0048"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.676132 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc27467d-f028-4378-8e74-84b22dbc0048-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cc27467d-f028-4378-8e74-84b22dbc0048" (UID: "cc27467d-f028-4378-8e74-84b22dbc0048"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.685465 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "cc27467d-f028-4378-8e74-84b22dbc0048" (UID: "cc27467d-f028-4378-8e74-84b22dbc0048"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.701797 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc27467d-f028-4378-8e74-84b22dbc0048-kube-api-access-jb5bh" (OuterVolumeSpecName: "kube-api-access-jb5bh") pod "cc27467d-f028-4378-8e74-84b22dbc0048" (UID: "cc27467d-f028-4378-8e74-84b22dbc0048"). InnerVolumeSpecName "kube-api-access-jb5bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.702752 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-scripts" (OuterVolumeSpecName: "scripts") pod "cc27467d-f028-4378-8e74-84b22dbc0048" (UID: "cc27467d-f028-4378-8e74-84b22dbc0048"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.705192 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc27467d-f028-4378-8e74-84b22dbc0048" (UID: "cc27467d-f028-4378-8e74-84b22dbc0048"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.705757 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "cc27467d-f028-4378-8e74-84b22dbc0048" (UID: "cc27467d-f028-4378-8e74-84b22dbc0048"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.775813 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.775840 4751 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cc27467d-f028-4378-8e74-84b22dbc0048-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.775851 4751 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cc27467d-f028-4378-8e74-84b22dbc0048-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.775860 4751 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.775869 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.775877 4751 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cc27467d-f028-4378-8e74-84b22dbc0048-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:47 crc kubenswrapper[4751]: I1123 04:10:47.775887 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb5bh\" (UniqueName: \"kubernetes.io/projected/cc27467d-f028-4378-8e74-84b22dbc0048-kube-api-access-jb5bh\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:48 crc kubenswrapper[4751]: I1123 04:10:48.326917 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-d526x" event={"ID":"cc27467d-f028-4378-8e74-84b22dbc0048","Type":"ContainerDied","Data":"bcd01ba8e6b61fa2dd2a4e961b68f7e2f51cb95110444d171f01efb35ed1f6ac"} Nov 23 04:10:48 crc kubenswrapper[4751]: I1123 04:10:48.327505 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcd01ba8e6b61fa2dd2a4e961b68f7e2f51cb95110444d171f01efb35ed1f6ac" Nov 23 04:10:48 crc kubenswrapper[4751]: I1123 04:10:48.327067 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-d526x" Nov 23 04:10:49 crc kubenswrapper[4751]: I1123 04:10:49.112093 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-x65b7" podUID="4e8bfa9a-1b92-428e-a443-21ccb190a5bd" containerName="ovn-controller" probeResult="failure" output=< Nov 23 04:10:49 crc kubenswrapper[4751]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 04:10:49 crc kubenswrapper[4751]: > Nov 23 04:10:51 crc kubenswrapper[4751]: I1123 04:10:51.606124 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 23 04:10:52 crc kubenswrapper[4751]: I1123 04:10:52.369227 4751 generic.go:334] "Generic (PLEG): container finished" podID="3885484b-1988-4a56-9b08-7848d614be82" containerID="7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9" exitCode=0 Nov 23 04:10:52 crc kubenswrapper[4751]: I1123 04:10:52.369268 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3885484b-1988-4a56-9b08-7848d614be82","Type":"ContainerDied","Data":"7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9"} Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.106993 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-x65b7" podUID="4e8bfa9a-1b92-428e-a443-21ccb190a5bd" containerName="ovn-controller" probeResult="failure" output=< Nov 23 04:10:54 crc kubenswrapper[4751]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 04:10:54 crc kubenswrapper[4751]: > Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.162469 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.170747 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-26bzb" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.417963 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x65b7-config-ghzgr"] Nov 23 04:10:54 crc kubenswrapper[4751]: E1123 04:10:54.418443 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc27467d-f028-4378-8e74-84b22dbc0048" containerName="swift-ring-rebalance" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.418468 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc27467d-f028-4378-8e74-84b22dbc0048" containerName="swift-ring-rebalance" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.418764 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc27467d-f028-4378-8e74-84b22dbc0048" containerName="swift-ring-rebalance" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.419537 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.422973 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.431416 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x65b7-config-ghzgr"] Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.504017 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.504060 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-log-ovn\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.504095 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run-ovn\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.504122 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-scripts\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.504171 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzj6b\" (UniqueName: \"kubernetes.io/projected/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-kube-api-access-mzj6b\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.504275 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-additional-scripts\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.606021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.606062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-log-ovn\") pod 
\"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.606125 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run-ovn\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.606153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-scripts\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.606404 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run-ovn\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.606478 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzj6b\" (UniqueName: \"kubernetes.io/projected/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-kube-api-access-mzj6b\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.606829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-additional-scripts\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.607175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-log-ovn\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.607289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.607616 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-additional-scripts\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.610536 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-scripts\") pod 
\"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.639638 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzj6b\" (UniqueName: \"kubernetes.io/projected/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-kube-api-access-mzj6b\") pod \"ovn-controller-x65b7-config-ghzgr\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:54 crc kubenswrapper[4751]: I1123 04:10:54.751565 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:56 crc kubenswrapper[4751]: I1123 04:10:56.408642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3885484b-1988-4a56-9b08-7848d614be82","Type":"ContainerStarted","Data":"5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35"} Nov 23 04:10:56 crc kubenswrapper[4751]: I1123 04:10:56.409372 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:10:56 crc kubenswrapper[4751]: I1123 04:10:56.411674 4751 generic.go:334] "Generic (PLEG): container finished" podID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerID="c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9" exitCode=0 Nov 23 04:10:56 crc kubenswrapper[4751]: I1123 04:10:56.411712 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"85de7e79-bbdf-4a3c-83d1-5a3977844a72","Type":"ContainerDied","Data":"c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9"} Nov 23 04:10:56 crc kubenswrapper[4751]: I1123 04:10:56.477559 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.46631713 podStartE2EDuration="1m7.477538034s" podCreationTimestamp="2025-11-23 04:09:49 +0000 UTC" firstStartedPulling="2025-11-23 04:09:51.253761079 +0000 UTC m=+887.447432438" lastFinishedPulling="2025-11-23 04:10:18.264981983 +0000 UTC m=+914.458653342" observedRunningTime="2025-11-23 04:10:56.446584552 +0000 UTC m=+952.640255921" watchObservedRunningTime="2025-11-23 04:10:56.477538034 +0000 UTC m=+952.671209393" Nov 23 04:10:56 crc kubenswrapper[4751]: I1123 04:10:56.481937 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x65b7-config-ghzgr"] Nov 23 04:10:57 crc kubenswrapper[4751]: I1123 04:10:57.420440 4751 generic.go:334] "Generic (PLEG): container finished" podID="81b35476-fdbc-4cc8-8606-ac2b5d39ae08" containerID="41d80b213856f906a9a5f575e3873e0a726abaa8898d570d4813e6b208ad9315" exitCode=0 Nov 23 04:10:57 crc kubenswrapper[4751]: I1123 04:10:57.420695 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x65b7-config-ghzgr" event={"ID":"81b35476-fdbc-4cc8-8606-ac2b5d39ae08","Type":"ContainerDied","Data":"41d80b213856f906a9a5f575e3873e0a726abaa8898d570d4813e6b208ad9315"} Nov 23 04:10:57 crc kubenswrapper[4751]: I1123 04:10:57.420722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x65b7-config-ghzgr" event={"ID":"81b35476-fdbc-4cc8-8606-ac2b5d39ae08","Type":"ContainerStarted","Data":"62ab91198be47bf5bd84b432ca08668f4c1df8fbefbd82ccc330553b6ac246fc"} Nov 23 04:10:57 crc kubenswrapper[4751]: I1123 04:10:57.422776 4751 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"85de7e79-bbdf-4a3c-83d1-5a3977844a72","Type":"ContainerStarted","Data":"e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace"} Nov 23 04:10:57 crc kubenswrapper[4751]: I1123 04:10:57.423427 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 04:10:57 crc kubenswrapper[4751]: I1123 04:10:57.425832 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sp5s2" event={"ID":"e010e2e6-3482-4511-8540-46aef4db130e","Type":"ContainerStarted","Data":"ee757f37e94742d8a5595f19f30415132dc013c44eb807e3e9d9f9f1abcf38dd"} Nov 23 04:10:57 crc kubenswrapper[4751]: I1123 04:10:57.490576 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371967.364218 podStartE2EDuration="1m9.490558096s" podCreationTimestamp="2025-11-23 04:09:48 +0000 UTC" firstStartedPulling="2025-11-23 04:09:50.95861868 +0000 UTC m=+887.152290039" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:10:57.487844354 +0000 UTC m=+953.681515753" watchObservedRunningTime="2025-11-23 04:10:57.490558096 +0000 UTC m=+953.684229455" Nov 23 04:10:57 crc kubenswrapper[4751]: I1123 04:10:57.493199 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-sp5s2" podStartSLOduration=2.60019302 podStartE2EDuration="13.493194035s" podCreationTimestamp="2025-11-23 04:10:44 +0000 UTC" firstStartedPulling="2025-11-23 04:10:45.208857679 +0000 UTC m=+941.402529048" lastFinishedPulling="2025-11-23 04:10:56.101858664 +0000 UTC m=+952.295530063" observedRunningTime="2025-11-23 04:10:57.458534396 +0000 UTC m=+953.652205755" watchObservedRunningTime="2025-11-23 04:10:57.493194035 +0000 UTC m=+953.686865394" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.777856 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.785923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea516dc6-70bc-461c-8b7e-e269f9287da4-etc-swift\") pod \"swift-storage-0\" (UID: \"ea516dc6-70bc-461c-8b7e-e269f9287da4\") " pod="openstack/swift-storage-0" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.840995 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.873735 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.981930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run-ovn\") pod \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.981939 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "81b35476-fdbc-4cc8-8606-ac2b5d39ae08" (UID: "81b35476-fdbc-4cc8-8606-ac2b5d39ae08"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.982520 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-scripts\") pod \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.982571 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-additional-scripts\") pod \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.982593 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzj6b\" (UniqueName: \"kubernetes.io/projected/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-kube-api-access-mzj6b\") pod \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.982613 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-log-ovn\") pod \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.982654 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run\") pod \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\" (UID: \"81b35476-fdbc-4cc8-8606-ac2b5d39ae08\") " Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.982764 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "81b35476-fdbc-4cc8-8606-ac2b5d39ae08" (UID: "81b35476-fdbc-4cc8-8606-ac2b5d39ae08"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.982853 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run" (OuterVolumeSpecName: "var-run") pod "81b35476-fdbc-4cc8-8606-ac2b5d39ae08" (UID: "81b35476-fdbc-4cc8-8606-ac2b5d39ae08"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.983163 4751 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.983170 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "81b35476-fdbc-4cc8-8606-ac2b5d39ae08" (UID: "81b35476-fdbc-4cc8-8606-ac2b5d39ae08"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.983177 4751 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.983217 4751 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.983834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-scripts" (OuterVolumeSpecName: "scripts") pod "81b35476-fdbc-4cc8-8606-ac2b5d39ae08" (UID: "81b35476-fdbc-4cc8-8606-ac2b5d39ae08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:10:58 crc kubenswrapper[4751]: I1123 04:10:58.985684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-kube-api-access-mzj6b" (OuterVolumeSpecName: "kube-api-access-mzj6b") pod "81b35476-fdbc-4cc8-8606-ac2b5d39ae08" (UID: "81b35476-fdbc-4cc8-8606-ac2b5d39ae08"). InnerVolumeSpecName "kube-api-access-mzj6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.084401 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.084430 4751 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.084440 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzj6b\" (UniqueName: \"kubernetes.io/projected/81b35476-fdbc-4cc8-8606-ac2b5d39ae08-kube-api-access-mzj6b\") on node \"crc\" DevicePath \"\"" Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.101911 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-x65b7" Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.413526 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 04:10:59 crc kubenswrapper[4751]: W1123 04:10:59.422641 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea516dc6_70bc_461c_8b7e_e269f9287da4.slice/crio-05064c2eb37a3bd9154622d5039bd32c94847ebb398464134e716a1ba2a31265 WatchSource:0}: Error finding container 05064c2eb37a3bd9154622d5039bd32c94847ebb398464134e716a1ba2a31265: Status 404 returned error can't find the container with id 05064c2eb37a3bd9154622d5039bd32c94847ebb398464134e716a1ba2a31265 Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.440443 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"05064c2eb37a3bd9154622d5039bd32c94847ebb398464134e716a1ba2a31265"} Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.441647 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x65b7-config-ghzgr" event={"ID":"81b35476-fdbc-4cc8-8606-ac2b5d39ae08","Type":"ContainerDied","Data":"62ab91198be47bf5bd84b432ca08668f4c1df8fbefbd82ccc330553b6ac246fc"} Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.441678 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62ab91198be47bf5bd84b432ca08668f4c1df8fbefbd82ccc330553b6ac246fc" Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.441690 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x65b7-config-ghzgr" Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.944412 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x65b7-config-ghzgr"] Nov 23 04:10:59 crc kubenswrapper[4751]: I1123 04:10:59.955738 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x65b7-config-ghzgr"] Nov 23 04:11:00 crc kubenswrapper[4751]: I1123 04:11:00.656574 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b35476-fdbc-4cc8-8606-ac2b5d39ae08" path="/var/lib/kubelet/pods/81b35476-fdbc-4cc8-8606-ac2b5d39ae08/volumes" Nov 23 04:11:04 crc kubenswrapper[4751]: I1123 04:11:04.508937 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"4fbaea2755d0750d5b49b15a51127da27802801468f046e03116742ccbcee102"} Nov 23 04:11:04 crc kubenswrapper[4751]: I1123 04:11:04.509277 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"2843ad18dbe0b31af1e51b991be90a63e20508d2c18e53f157e70bf9cd7773c2"} Nov 23 04:11:05 crc kubenswrapper[4751]: I1123 04:11:05.525590 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"dc85989249c93c64eb114ca3963078298277f99b8c4e3989804ac0668f8c697d"} Nov 23 04:11:05 crc kubenswrapper[4751]: I1123 04:11:05.525660 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"36a9f8cd35004cbe42f7d6301f7f2e1ca57d9161ae02a706058ecdfdbaa32628"} Nov 23 04:11:06 crc kubenswrapper[4751]: I1123 04:11:06.540781 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"2c3c47397db792117f75a5c10b4f10558ce7725cf2d1c1c93b1b3ec4e77afc8f"} Nov 23 04:11:07 crc kubenswrapper[4751]: I1123 04:11:07.559579 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"71e6fae62c881bb52050d34d7db5e76f9ed28d1e0d57bdb647d79fb12a1b0122"} Nov 23 04:11:07 crc kubenswrapper[4751]: I1123 04:11:07.559988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"ce54f7515758d0292428680da9fa705da75771e36fca98965fc275f86305c28d"} Nov 23 04:11:07 crc kubenswrapper[4751]: I1123 04:11:07.560052 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"bb4d59fe7cd1707d29053e6787356645ec75513053b3c35205bd2fdf031168cb"} Nov 23 04:11:07 crc kubenswrapper[4751]: I1123 04:11:07.564452 4751 generic.go:334] "Generic (PLEG): container finished" podID="e010e2e6-3482-4511-8540-46aef4db130e" containerID="ee757f37e94742d8a5595f19f30415132dc013c44eb807e3e9d9f9f1abcf38dd" exitCode=0 Nov 23 04:11:07 crc kubenswrapper[4751]: I1123 04:11:07.564507 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sp5s2" 
event={"ID":"e010e2e6-3482-4511-8540-46aef4db130e","Type":"ContainerDied","Data":"ee757f37e94742d8a5595f19f30415132dc013c44eb807e3e9d9f9f1abcf38dd"} Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.579331 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"1031ed108bd10cb527067b268aa8fd7e9e744ced74a5dce679001cae5cf635fd"} Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.580465 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"d3fdfa62f728d0a5afdd6ed0637bde410dd45a4fdff5b62251fa77abc0e6122c"} Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.580483 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"2c993624ccba1dabe2fb727a98b260077a2df0cad6ac0453d1503eac8a36b44f"} Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.908802 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sp5s2" Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.947436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-combined-ca-bundle\") pod \"e010e2e6-3482-4511-8540-46aef4db130e\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.947473 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj9hx\" (UniqueName: \"kubernetes.io/projected/e010e2e6-3482-4511-8540-46aef4db130e-kube-api-access-lj9hx\") pod \"e010e2e6-3482-4511-8540-46aef4db130e\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.947610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-db-sync-config-data\") pod \"e010e2e6-3482-4511-8540-46aef4db130e\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.947646 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-config-data\") pod \"e010e2e6-3482-4511-8540-46aef4db130e\" (UID: \"e010e2e6-3482-4511-8540-46aef4db130e\") " Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.952505 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e010e2e6-3482-4511-8540-46aef4db130e-kube-api-access-lj9hx" (OuterVolumeSpecName: "kube-api-access-lj9hx") pod "e010e2e6-3482-4511-8540-46aef4db130e" (UID: "e010e2e6-3482-4511-8540-46aef4db130e"). InnerVolumeSpecName "kube-api-access-lj9hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.957786 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e010e2e6-3482-4511-8540-46aef4db130e" (UID: "e010e2e6-3482-4511-8540-46aef4db130e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.971474 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e010e2e6-3482-4511-8540-46aef4db130e" (UID: "e010e2e6-3482-4511-8540-46aef4db130e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:08 crc kubenswrapper[4751]: I1123 04:11:08.999111 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-config-data" (OuterVolumeSpecName: "config-data") pod "e010e2e6-3482-4511-8540-46aef4db130e" (UID: "e010e2e6-3482-4511-8540-46aef4db130e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.049691 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.049877 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.049885 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e010e2e6-3482-4511-8540-46aef4db130e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.049894 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj9hx\" (UniqueName: \"kubernetes.io/projected/e010e2e6-3482-4511-8540-46aef4db130e-kube-api-access-lj9hx\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.600067 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"e94e64e6bf47e23eb774ed16572526a5d7902c4c1da6af563c292613aa792a27"} Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.600132 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"30323b90ac53d0f13ed3454386b93897a49a30cb4ddf5d2ae552c060243df2c7"} Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.600152 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"ff8f7f40e36a33fc498172fb014ea90568afb7c518178daac781e2060e0eba42"} Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.600169 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea516dc6-70bc-461c-8b7e-e269f9287da4","Type":"ContainerStarted","Data":"35bf091af7318e334c165e3d47d7282462f79cf175c9586d462980064a869603"} Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.602677 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sp5s2" event={"ID":"e010e2e6-3482-4511-8540-46aef4db130e","Type":"ContainerDied","Data":"996ed38fa33199676034b793f20c80f9adf75621ac97771ecd1d15fb18fcc70b"} Nov 23 04:11:09 crc 
kubenswrapper[4751]: I1123 04:11:09.602729 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="996ed38fa33199676034b793f20c80f9adf75621ac97771ecd1d15fb18fcc70b" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.602823 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sp5s2" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.670443 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.100419516 podStartE2EDuration="44.670407361s" podCreationTimestamp="2025-11-23 04:10:25 +0000 UTC" firstStartedPulling="2025-11-23 04:10:59.424720209 +0000 UTC m=+955.618391568" lastFinishedPulling="2025-11-23 04:11:07.994708054 +0000 UTC m=+964.188379413" observedRunningTime="2025-11-23 04:11:09.652629114 +0000 UTC m=+965.846300543" watchObservedRunningTime="2025-11-23 04:11:09.670407361 +0000 UTC m=+965.864078760" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.975684 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-2sq8w"] Nov 23 04:11:09 crc kubenswrapper[4751]: E1123 04:11:09.976000 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b35476-fdbc-4cc8-8606-ac2b5d39ae08" containerName="ovn-config" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.976017 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b35476-fdbc-4cc8-8606-ac2b5d39ae08" containerName="ovn-config" Nov 23 04:11:09 crc kubenswrapper[4751]: E1123 04:11:09.976035 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e010e2e6-3482-4511-8540-46aef4db130e" containerName="glance-db-sync" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.976043 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e010e2e6-3482-4511-8540-46aef4db130e" containerName="glance-db-sync" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.976215 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b35476-fdbc-4cc8-8606-ac2b5d39ae08" containerName="ovn-config" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.976242 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e010e2e6-3482-4511-8540-46aef4db130e" containerName="glance-db-sync" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.977055 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w" Nov 23 04:11:09 crc kubenswrapper[4751]: I1123 04:11:09.984102 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.002833 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-2sq8w"] Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.052751 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-2sq8w"] Nov 23 04:11:10 crc kubenswrapper[4751]: E1123 04:11:10.053277 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-c58w6 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w" podUID="01471021-a895-40b1-a1d9-2714289a2a1e" Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.064641 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-config\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w" Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.064822 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c58w6\" (UniqueName: \"kubernetes.io/projected/01471021-a895-40b1-a1d9-2714289a2a1e-kube-api-access-c58w6\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w" Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.064937 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w" Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.065018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w" Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.065160 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w" Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.065242 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w" Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.082814 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.084205 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.102753 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-j2zjh"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166727 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166788 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166872 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166890 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-config\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166906 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c58w6\" (UniqueName: \"kubernetes.io/projected/01471021-a895-40b1-a1d9-2714289a2a1e-kube-api-access-c58w6\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166963 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8rc\" (UniqueName: \"kubernetes.io/projected/da36c954-d8e7-425d-be39-822bfc9ed7cd-kube-api-access-qf8rc\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.166984 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.167026 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-config\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.167663 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.167947 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.168255 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.168551 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-config\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.169469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.183599 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c58w6\" (UniqueName: \"kubernetes.io/projected/01471021-a895-40b1-a1d9-2714289a2a1e-kube-api-access-c58w6\") pod \"dnsmasq-dns-5c79d794d7-2sq8w\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") " pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.268290 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-config\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.268530 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.268571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.268613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.268668 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf8rc\" (UniqueName: \"kubernetes.io/projected/da36c954-d8e7-425d-be39-822bfc9ed7cd-kube-api-access-qf8rc\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.268698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.269787 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.269836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.270127 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.270164 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.270205 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-config\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.290018 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf8rc\" (UniqueName: \"kubernetes.io/projected/da36c954-d8e7-425d-be39-822bfc9ed7cd-kube-api-access-qf8rc\") pod \"dnsmasq-dns-5f59b8f679-j2zjh\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.333756 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.399599 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.600572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.649951 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.685702 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.688105 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-cvnzp"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.689161 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cvnzp"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.689248 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cvnzp"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.774198 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-sb\") pod \"01471021-a895-40b1-a1d9-2714289a2a1e\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") "
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.774270 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-swift-storage-0\") pod \"01471021-a895-40b1-a1d9-2714289a2a1e\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") "
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.774313 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c58w6\" (UniqueName: \"kubernetes.io/projected/01471021-a895-40b1-a1d9-2714289a2a1e-kube-api-access-c58w6\") pod \"01471021-a895-40b1-a1d9-2714289a2a1e\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") "
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.774334 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-nb\") pod \"01471021-a895-40b1-a1d9-2714289a2a1e\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") "
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.774370 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-svc\") pod \"01471021-a895-40b1-a1d9-2714289a2a1e\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") "
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.774389 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-config\") pod \"01471021-a895-40b1-a1d9-2714289a2a1e\" (UID: \"01471021-a895-40b1-a1d9-2714289a2a1e\") "
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.774764 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-cwkbl"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.775897 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cwkbl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.776200 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "01471021-a895-40b1-a1d9-2714289a2a1e" (UID: "01471021-a895-40b1-a1d9-2714289a2a1e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.777032 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "01471021-a895-40b1-a1d9-2714289a2a1e" (UID: "01471021-a895-40b1-a1d9-2714289a2a1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.777373 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01471021-a895-40b1-a1d9-2714289a2a1e" (UID: "01471021-a895-40b1-a1d9-2714289a2a1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.777435 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "01471021-a895-40b1-a1d9-2714289a2a1e" (UID: "01471021-a895-40b1-a1d9-2714289a2a1e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.777576 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-config" (OuterVolumeSpecName: "config") pod "01471021-a895-40b1-a1d9-2714289a2a1e" (UID: "01471021-a895-40b1-a1d9-2714289a2a1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.784464 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4882-account-create-qhczl"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.785602 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4882-account-create-qhczl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.789065 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01471021-a895-40b1-a1d9-2714289a2a1e-kube-api-access-c58w6" (OuterVolumeSpecName: "kube-api-access-c58w6") pod "01471021-a895-40b1-a1d9-2714289a2a1e" (UID: "01471021-a895-40b1-a1d9-2714289a2a1e"). InnerVolumeSpecName "kube-api-access-c58w6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.789476 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.796491 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4882-account-create-qhczl"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.810711 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cwkbl"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875664 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chdtf\" (UniqueName: \"kubernetes.io/projected/c1953dab-c7b4-46bb-86a8-f83e7db63538-kube-api-access-chdtf\") pod \"cinder-db-create-cvnzp\" (UID: \"c1953dab-c7b4-46bb-86a8-f83e7db63538\") " pod="openstack/cinder-db-create-cvnzp"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875715 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n67b9\" (UniqueName: \"kubernetes.io/projected/99795e69-f0fd-4764-94d1-45148eaed6f7-kube-api-access-n67b9\") pod \"barbican-4882-account-create-qhczl\" (UID: \"99795e69-f0fd-4764-94d1-45148eaed6f7\") " pod="openstack/barbican-4882-account-create-qhczl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875765 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-operator-scripts\") pod \"barbican-db-create-cwkbl\" (UID: \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\") " pod="openstack/barbican-db-create-cwkbl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875820 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1953dab-c7b4-46bb-86a8-f83e7db63538-operator-scripts\") pod \"cinder-db-create-cvnzp\" (UID: \"c1953dab-c7b4-46bb-86a8-f83e7db63538\") " pod="openstack/cinder-db-create-cvnzp"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99795e69-f0fd-4764-94d1-45148eaed6f7-operator-scripts\") pod \"barbican-4882-account-create-qhczl\" (UID: \"99795e69-f0fd-4764-94d1-45148eaed6f7\") " pod="openstack/barbican-4882-account-create-qhczl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2ww6\" (UniqueName: \"kubernetes.io/projected/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-kube-api-access-j2ww6\") pod \"barbican-db-create-cwkbl\" (UID: \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\") " pod="openstack/barbican-db-create-cwkbl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875927 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c58w6\" (UniqueName: \"kubernetes.io/projected/01471021-a895-40b1-a1d9-2714289a2a1e-kube-api-access-c58w6\") on node \"crc\" DevicePath \"\""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875937 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875946 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875954 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-config\") on node \"crc\" DevicePath \"\""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875963 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.875971 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/01471021-a895-40b1-a1d9-2714289a2a1e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.882506 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-42b4-account-create-h4mjj"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.913486 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-42b4-account-create-h4mjj"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.913588 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-42b4-account-create-h4mjj"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.925415 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.981414 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1953dab-c7b4-46bb-86a8-f83e7db63538-operator-scripts\") pod \"cinder-db-create-cvnzp\" (UID: \"c1953dab-c7b4-46bb-86a8-f83e7db63538\") " pod="openstack/cinder-db-create-cvnzp"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.981703 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99795e69-f0fd-4764-94d1-45148eaed6f7-operator-scripts\") pod \"barbican-4882-account-create-qhczl\" (UID: \"99795e69-f0fd-4764-94d1-45148eaed6f7\") " pod="openstack/barbican-4882-account-create-qhczl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.981728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2ww6\" (UniqueName: \"kubernetes.io/projected/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-kube-api-access-j2ww6\") pod \"barbican-db-create-cwkbl\" (UID: \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\") " pod="openstack/barbican-db-create-cwkbl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.981727 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-j2zjh"]
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.981765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chdtf\" (UniqueName: \"kubernetes.io/projected/c1953dab-c7b4-46bb-86a8-f83e7db63538-kube-api-access-chdtf\") pod \"cinder-db-create-cvnzp\" (UID: \"c1953dab-c7b4-46bb-86a8-f83e7db63538\") " pod="openstack/cinder-db-create-cvnzp"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.981786 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n67b9\" (UniqueName: \"kubernetes.io/projected/99795e69-f0fd-4764-94d1-45148eaed6f7-kube-api-access-n67b9\") pod \"barbican-4882-account-create-qhczl\" (UID: \"99795e69-f0fd-4764-94d1-45148eaed6f7\") " pod="openstack/barbican-4882-account-create-qhczl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.981819 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-operator-scripts\") pod \"barbican-db-create-cwkbl\" (UID: \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\") " pod="openstack/barbican-db-create-cwkbl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.982710 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1953dab-c7b4-46bb-86a8-f83e7db63538-operator-scripts\") pod \"cinder-db-create-cvnzp\" (UID: \"c1953dab-c7b4-46bb-86a8-f83e7db63538\") " pod="openstack/cinder-db-create-cvnzp"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.983080 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-operator-scripts\") pod \"barbican-db-create-cwkbl\" (UID: \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\") " pod="openstack/barbican-db-create-cwkbl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.984396 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99795e69-f0fd-4764-94d1-45148eaed6f7-operator-scripts\") pod \"barbican-4882-account-create-qhczl\" (UID: \"99795e69-f0fd-4764-94d1-45148eaed6f7\") " pod="openstack/barbican-4882-account-create-qhczl"
Nov 23 04:11:10 crc kubenswrapper[4751]: I1123 04:11:10.997318 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n67b9\" (UniqueName: \"kubernetes.io/projected/99795e69-f0fd-4764-94d1-45148eaed6f7-kube-api-access-n67b9\") pod \"barbican-4882-account-create-qhczl\" (UID: \"99795e69-f0fd-4764-94d1-45148eaed6f7\") " pod="openstack/barbican-4882-account-create-qhczl"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.000747 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2ww6\" (UniqueName: \"kubernetes.io/projected/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-kube-api-access-j2ww6\") pod \"barbican-db-create-cwkbl\" (UID: \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\") " pod="openstack/barbican-db-create-cwkbl"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.001597 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chdtf\" (UniqueName: \"kubernetes.io/projected/c1953dab-c7b4-46bb-86a8-f83e7db63538-kube-api-access-chdtf\") pod \"cinder-db-create-cvnzp\" (UID: \"c1953dab-c7b4-46bb-86a8-f83e7db63538\") " pod="openstack/cinder-db-create-cvnzp"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.026732 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cvnzp"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.030317 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2pg58"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.031321 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.035613 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vxbn5"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.035707 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.036439 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.038861 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2pg58"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.040838 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.083652 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0b797e-86af-4db9-b133-f12052f4a258-operator-scripts\") pod \"cinder-42b4-account-create-h4mjj\" (UID: \"2e0b797e-86af-4db9-b133-f12052f4a258\") " pod="openstack/cinder-42b4-account-create-h4mjj"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.083733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkmh8\" (UniqueName: \"kubernetes.io/projected/2e0b797e-86af-4db9-b133-f12052f4a258-kube-api-access-zkmh8\") pod \"cinder-42b4-account-create-h4mjj\" (UID: \"2e0b797e-86af-4db9-b133-f12052f4a258\") " pod="openstack/cinder-42b4-account-create-h4mjj"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.144593 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cwkbl"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.154029 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4882-account-create-qhczl"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.167125 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-h5dhh"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.168211 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h5dhh"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.173148 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b14b-account-create-cpb55"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.174158 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b14b-account-create-cpb55"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.175532 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.185176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0b797e-86af-4db9-b133-f12052f4a258-operator-scripts\") pod \"cinder-42b4-account-create-h4mjj\" (UID: \"2e0b797e-86af-4db9-b133-f12052f4a258\") " pod="openstack/cinder-42b4-account-create-h4mjj"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.185227 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-config-data\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.185181 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-h5dhh"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.185259 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wbc9\" (UniqueName: \"kubernetes.io/projected/30448e27-081b-401d-abe4-b454365c5831-kube-api-access-6wbc9\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.185479 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkmh8\" (UniqueName: \"kubernetes.io/projected/2e0b797e-86af-4db9-b133-f12052f4a258-kube-api-access-zkmh8\") pod \"cinder-42b4-account-create-h4mjj\" (UID: \"2e0b797e-86af-4db9-b133-f12052f4a258\") " pod="openstack/cinder-42b4-account-create-h4mjj"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.185547 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-combined-ca-bundle\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.186452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0b797e-86af-4db9-b133-f12052f4a258-operator-scripts\") pod \"cinder-42b4-account-create-h4mjj\" (UID: \"2e0b797e-86af-4db9-b133-f12052f4a258\") " pod="openstack/cinder-42b4-account-create-h4mjj"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.199976 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b14b-account-create-cpb55"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.223837 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkmh8\" (UniqueName: \"kubernetes.io/projected/2e0b797e-86af-4db9-b133-f12052f4a258-kube-api-access-zkmh8\") pod \"cinder-42b4-account-create-h4mjj\" (UID: \"2e0b797e-86af-4db9-b133-f12052f4a258\") " pod="openstack/cinder-42b4-account-create-h4mjj"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.244137 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-42b4-account-create-h4mjj"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.287044 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5cc4054-b04a-4b60-be8b-8411ded0d63a-operator-scripts\") pod \"neutron-b14b-account-create-cpb55\" (UID: \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\") " pod="openstack/neutron-b14b-account-create-cpb55"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.287132 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprzc\" (UniqueName: \"kubernetes.io/projected/e5cc4054-b04a-4b60-be8b-8411ded0d63a-kube-api-access-dprzc\") pod \"neutron-b14b-account-create-cpb55\" (UID: \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\") " pod="openstack/neutron-b14b-account-create-cpb55"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.287161 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-config-data\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.287189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wbc9\" (UniqueName: \"kubernetes.io/projected/30448e27-081b-401d-abe4-b454365c5831-kube-api-access-6wbc9\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.287228 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-combined-ca-bundle\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.287265 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a310f02b-1e9b-44c2-ac1e-39737a0123d7-operator-scripts\") pod \"neutron-db-create-h5dhh\" (UID: \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\") " pod="openstack/neutron-db-create-h5dhh"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.287283 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhpz\" (UniqueName: \"kubernetes.io/projected/a310f02b-1e9b-44c2-ac1e-39737a0123d7-kube-api-access-wmhpz\") pod \"neutron-db-create-h5dhh\" (UID: \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\") " pod="openstack/neutron-db-create-h5dhh"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.303516 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-config-data\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.309861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wbc9\" (UniqueName: \"kubernetes.io/projected/30448e27-081b-401d-abe4-b454365c5831-kube-api-access-6wbc9\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.316097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-combined-ca-bundle\") pod \"keystone-db-sync-2pg58\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.356218 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2pg58"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.389698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a310f02b-1e9b-44c2-ac1e-39737a0123d7-operator-scripts\") pod \"neutron-db-create-h5dhh\" (UID: \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\") " pod="openstack/neutron-db-create-h5dhh"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.389744 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhpz\" (UniqueName: \"kubernetes.io/projected/a310f02b-1e9b-44c2-ac1e-39737a0123d7-kube-api-access-wmhpz\") pod \"neutron-db-create-h5dhh\" (UID: \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\") " pod="openstack/neutron-db-create-h5dhh"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.389764 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5cc4054-b04a-4b60-be8b-8411ded0d63a-operator-scripts\") pod \"neutron-b14b-account-create-cpb55\" (UID: \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\") " pod="openstack/neutron-b14b-account-create-cpb55"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.389827 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dprzc\" (UniqueName: \"kubernetes.io/projected/e5cc4054-b04a-4b60-be8b-8411ded0d63a-kube-api-access-dprzc\") pod \"neutron-b14b-account-create-cpb55\" (UID: \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\") " pod="openstack/neutron-b14b-account-create-cpb55"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.390955 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a310f02b-1e9b-44c2-ac1e-39737a0123d7-operator-scripts\") pod \"neutron-db-create-h5dhh\" (UID: \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\") " pod="openstack/neutron-db-create-h5dhh"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.390974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5cc4054-b04a-4b60-be8b-8411ded0d63a-operator-scripts\") pod \"neutron-b14b-account-create-cpb55\" (UID: \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\") " pod="openstack/neutron-b14b-account-create-cpb55"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.412950 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dprzc\" (UniqueName: \"kubernetes.io/projected/e5cc4054-b04a-4b60-be8b-8411ded0d63a-kube-api-access-dprzc\") pod \"neutron-b14b-account-create-cpb55\" (UID: \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\") " pod="openstack/neutron-b14b-account-create-cpb55"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.414487 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhpz\" (UniqueName: \"kubernetes.io/projected/a310f02b-1e9b-44c2-ac1e-39737a0123d7-kube-api-access-wmhpz\") pod \"neutron-db-create-h5dhh\" (UID: \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\") " pod="openstack/neutron-db-create-h5dhh"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.431327 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h5dhh"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.438197 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b14b-account-create-cpb55"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.606682 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-cvnzp"]
Nov 23 04:11:11 crc kubenswrapper[4751]: W1123 04:11:11.613658 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1953dab_c7b4_46bb_86a8_f83e7db63538.slice/crio-b595952d30d49e512ce2d7f29263231091370930abfa8568773a1cc4b26025ad WatchSource:0}: Error finding container b595952d30d49e512ce2d7f29263231091370930abfa8568773a1cc4b26025ad: Status 404 returned error can't find the container with id b595952d30d49e512ce2d7f29263231091370930abfa8568773a1cc4b26025ad
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.676691 4751 generic.go:334] "Generic (PLEG): container finished" podID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerID="f6a62e6961c9fd0271349bb9b75edd15434f701db8b3512478550a53b7215faf" exitCode=0
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.676760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" event={"ID":"da36c954-d8e7-425d-be39-822bfc9ed7cd","Type":"ContainerDied","Data":"f6a62e6961c9fd0271349bb9b75edd15434f701db8b3512478550a53b7215faf"}
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.676787 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" event={"ID":"da36c954-d8e7-425d-be39-822bfc9ed7cd","Type":"ContainerStarted","Data":"66d40f17e4ab81a8cd100d034857a49e95ac636c0a6168e8f8f7c9cceb45ed8a"}
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.678228 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-2sq8w"
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.682157 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cvnzp" event={"ID":"c1953dab-c7b4-46bb-86a8-f83e7db63538","Type":"ContainerStarted","Data":"b595952d30d49e512ce2d7f29263231091370930abfa8568773a1cc4b26025ad"}
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.812295 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-2sq8w"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.842410 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-2sq8w"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.876686 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2pg58"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.950456 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cwkbl"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.955230 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-42b4-account-create-h4mjj"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.961364 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4882-account-create-qhczl"]
Nov 23 04:11:11 crc kubenswrapper[4751]: I1123 04:11:11.970047 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-h5dhh"]
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.085240 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b14b-account-create-cpb55"]
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.659595 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01471021-a895-40b1-a1d9-2714289a2a1e" path="/var/lib/kubelet/pods/01471021-a895-40b1-a1d9-2714289a2a1e/volumes"
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.688586 4751 generic.go:334] "Generic (PLEG): container finished" podID="aabbcf91-7bbc-41e0-9282-d89d88fa89b7" containerID="747a6f7ad6c49c2a9483351c62c541657f001f84148c3b8686467abef313fdcc" exitCode=0
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.688672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cwkbl" event={"ID":"aabbcf91-7bbc-41e0-9282-d89d88fa89b7","Type":"ContainerDied","Data":"747a6f7ad6c49c2a9483351c62c541657f001f84148c3b8686467abef313fdcc"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.688706 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cwkbl" event={"ID":"aabbcf91-7bbc-41e0-9282-d89d88fa89b7","Type":"ContainerStarted","Data":"74f88b736fc1a3def7702083968c6d2ee14def3ac29eccac63f0524abda71cdd"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.691014 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" event={"ID":"da36c954-d8e7-425d-be39-822bfc9ed7cd","Type":"ContainerStarted","Data":"a9c9532667578a1611aafa6fa827a1ba2915b682379869554cf4f542563fa52d"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.691170 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh"
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.693581 4751 generic.go:334] "Generic (PLEG): container finished" podID="2e0b797e-86af-4db9-b133-f12052f4a258" containerID="13581f3e4c9936555d4b51bb66c25d50f604ff438025d32fbf60369ea076baf4" exitCode=0
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.693653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-42b4-account-create-h4mjj" event={"ID":"2e0b797e-86af-4db9-b133-f12052f4a258","Type":"ContainerDied","Data":"13581f3e4c9936555d4b51bb66c25d50f604ff438025d32fbf60369ea076baf4"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.693680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-42b4-account-create-h4mjj" event={"ID":"2e0b797e-86af-4db9-b133-f12052f4a258","Type":"ContainerStarted","Data":"fb67941fbec845ead580bc2fc758cab32d8001ce25a2b01110bd57cc8c86e957"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.695465 4751 generic.go:334] "Generic (PLEG): container finished" podID="c1953dab-c7b4-46bb-86a8-f83e7db63538" containerID="64a9823b055807457ce1b06e0a43e2cd6581d11f4b8c293d31d1af81a8b334a3" exitCode=0
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.695548 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cvnzp" event={"ID":"c1953dab-c7b4-46bb-86a8-f83e7db63538","Type":"ContainerDied","Data":"64a9823b055807457ce1b06e0a43e2cd6581d11f4b8c293d31d1af81a8b334a3"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.696956 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b14b-account-create-cpb55" event={"ID":"e5cc4054-b04a-4b60-be8b-8411ded0d63a","Type":"ContainerStarted","Data":"facab3d436fcaf27193843c3c3d9dd3d81eeb6b6b11e2249833cc680306bdf79"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.696983 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b14b-account-create-cpb55" event={"ID":"e5cc4054-b04a-4b60-be8b-8411ded0d63a","Type":"ContainerStarted","Data":"891e7faa1e85d3e5100412613a90f348added2ed1f2a96ea33582c9fa9302c57"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.698757 4751 generic.go:334] "Generic (PLEG): container finished" podID="99795e69-f0fd-4764-94d1-45148eaed6f7" containerID="61198fb5ac0ba5fc7016d0803f024a6a72654918a96a2e383a831930926cd44b" exitCode=0
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.698892 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4882-account-create-qhczl" event={"ID":"99795e69-f0fd-4764-94d1-45148eaed6f7","Type":"ContainerDied","Data":"61198fb5ac0ba5fc7016d0803f024a6a72654918a96a2e383a831930926cd44b"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.698935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4882-account-create-qhczl" event={"ID":"99795e69-f0fd-4764-94d1-45148eaed6f7","Type":"ContainerStarted","Data":"5abda6c6c9a4b7b1f742af0ee45a95b753f2ed2fc03883961c016d3e51f6a8ee"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.708328 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pg58" event={"ID":"30448e27-081b-401d-abe4-b454365c5831","Type":"ContainerStarted","Data":"1c771371e1c53d20e8e182025d2e3108775d09bb93c4ff82ec95924b6d640e14"}
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.710190 4751 generic.go:334] "Generic (PLEG): container finished" podID="a310f02b-1e9b-44c2-ac1e-39737a0123d7" containerID="137bec19a10de6236ac0e4d2aaf3f6566d49f0dd64e545721aa3b4bfe6229f9e" exitCode=0
Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.710224 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h5dhh"
event={"ID":"a310f02b-1e9b-44c2-ac1e-39737a0123d7","Type":"ContainerDied","Data":"137bec19a10de6236ac0e4d2aaf3f6566d49f0dd64e545721aa3b4bfe6229f9e"} Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.710260 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h5dhh" event={"ID":"a310f02b-1e9b-44c2-ac1e-39737a0123d7","Type":"ContainerStarted","Data":"4dfb564b1e00b08aeafc5ae79480ef280a5f408d1afd0edbc9e9e1e02277fc87"} Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.738900 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b14b-account-create-cpb55" podStartSLOduration=1.738876756 podStartE2EDuration="1.738876756s" podCreationTimestamp="2025-11-23 04:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:12.737618733 +0000 UTC m=+968.931290092" watchObservedRunningTime="2025-11-23 04:11:12.738876756 +0000 UTC m=+968.932548115" Nov 23 04:11:12 crc kubenswrapper[4751]: I1123 04:11:12.756220 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" podStartSLOduration=2.75620627 podStartE2EDuration="2.75620627s" podCreationTimestamp="2025-11-23 04:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:12.753872289 +0000 UTC m=+968.947543648" watchObservedRunningTime="2025-11-23 04:11:12.75620627 +0000 UTC m=+968.949877629" Nov 23 04:11:13 crc kubenswrapper[4751]: I1123 04:11:13.721078 4751 generic.go:334] "Generic (PLEG): container finished" podID="e5cc4054-b04a-4b60-be8b-8411ded0d63a" containerID="facab3d436fcaf27193843c3c3d9dd3d81eeb6b6b11e2249833cc680306bdf79" exitCode=0 Nov 23 04:11:13 crc kubenswrapper[4751]: I1123 04:11:13.721207 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b14b-account-create-cpb55" event={"ID":"e5cc4054-b04a-4b60-be8b-8411ded0d63a","Type":"ContainerDied","Data":"facab3d436fcaf27193843c3c3d9dd3d81eeb6b6b11e2249833cc680306bdf79"} Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.698047 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4882-account-create-qhczl" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.749810 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h5dhh" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.772699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4882-account-create-qhczl" event={"ID":"99795e69-f0fd-4764-94d1-45148eaed6f7","Type":"ContainerDied","Data":"5abda6c6c9a4b7b1f742af0ee45a95b753f2ed2fc03883961c016d3e51f6a8ee"} Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.772740 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5abda6c6c9a4b7b1f742af0ee45a95b753f2ed2fc03883961c016d3e51f6a8ee" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.772798 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4882-account-create-qhczl" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.776031 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-h5dhh" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.776099 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h5dhh" event={"ID":"a310f02b-1e9b-44c2-ac1e-39737a0123d7","Type":"ContainerDied","Data":"4dfb564b1e00b08aeafc5ae79480ef280a5f408d1afd0edbc9e9e1e02277fc87"} Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.776140 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dfb564b1e00b08aeafc5ae79480ef280a5f408d1afd0edbc9e9e1e02277fc87" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.780114 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cvnzp" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.783308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cwkbl" event={"ID":"aabbcf91-7bbc-41e0-9282-d89d88fa89b7","Type":"ContainerDied","Data":"74f88b736fc1a3def7702083968c6d2ee14def3ac29eccac63f0524abda71cdd"} Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.783388 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74f88b736fc1a3def7702083968c6d2ee14def3ac29eccac63f0524abda71cdd" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.784842 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-42b4-account-create-h4mjj" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.785926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-42b4-account-create-h4mjj" event={"ID":"2e0b797e-86af-4db9-b133-f12052f4a258","Type":"ContainerDied","Data":"fb67941fbec845ead580bc2fc758cab32d8001ce25a2b01110bd57cc8c86e957"} Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.785978 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb67941fbec845ead580bc2fc758cab32d8001ce25a2b01110bd57cc8c86e957" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.787602 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-cvnzp" event={"ID":"c1953dab-c7b4-46bb-86a8-f83e7db63538","Type":"ContainerDied","Data":"b595952d30d49e512ce2d7f29263231091370930abfa8568773a1cc4b26025ad"} Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.787624 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b595952d30d49e512ce2d7f29263231091370930abfa8568773a1cc4b26025ad" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.787655 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-cvnzp" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.789833 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b14b-account-create-cpb55" event={"ID":"e5cc4054-b04a-4b60-be8b-8411ded0d63a","Type":"ContainerDied","Data":"891e7faa1e85d3e5100412613a90f348added2ed1f2a96ea33582c9fa9302c57"} Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.789885 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="891e7faa1e85d3e5100412613a90f348added2ed1f2a96ea33582c9fa9302c57" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.800641 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-cwkbl" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820386 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1953dab-c7b4-46bb-86a8-f83e7db63538-operator-scripts\") pod \"c1953dab-c7b4-46bb-86a8-f83e7db63538\" (UID: \"c1953dab-c7b4-46bb-86a8-f83e7db63538\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820479 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2ww6\" (UniqueName: \"kubernetes.io/projected/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-kube-api-access-j2ww6\") pod \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\" (UID: \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820566 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmhpz\" (UniqueName: \"kubernetes.io/projected/a310f02b-1e9b-44c2-ac1e-39737a0123d7-kube-api-access-wmhpz\") pod \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\" (UID: \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820627 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n67b9\" (UniqueName: \"kubernetes.io/projected/99795e69-f0fd-4764-94d1-45148eaed6f7-kube-api-access-n67b9\") pod \"99795e69-f0fd-4764-94d1-45148eaed6f7\" (UID: \"99795e69-f0fd-4764-94d1-45148eaed6f7\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-operator-scripts\") pod \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\" (UID: \"aabbcf91-7bbc-41e0-9282-d89d88fa89b7\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820701 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkmh8\" (UniqueName: \"kubernetes.io/projected/2e0b797e-86af-4db9-b133-f12052f4a258-kube-api-access-zkmh8\") pod \"2e0b797e-86af-4db9-b133-f12052f4a258\" (UID: \"2e0b797e-86af-4db9-b133-f12052f4a258\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820727 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0b797e-86af-4db9-b133-f12052f4a258-operator-scripts\") pod \"2e0b797e-86af-4db9-b133-f12052f4a258\" (UID: \"2e0b797e-86af-4db9-b133-f12052f4a258\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820817 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chdtf\" (UniqueName: \"kubernetes.io/projected/c1953dab-c7b4-46bb-86a8-f83e7db63538-kube-api-access-chdtf\") pod \"c1953dab-c7b4-46bb-86a8-f83e7db63538\" (UID: \"c1953dab-c7b4-46bb-86a8-f83e7db63538\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820860 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99795e69-f0fd-4764-94d1-45148eaed6f7-operator-scripts\") pod \"99795e69-f0fd-4764-94d1-45148eaed6f7\" (UID: \"99795e69-f0fd-4764-94d1-45148eaed6f7\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.820916 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a310f02b-1e9b-44c2-ac1e-39737a0123d7-operator-scripts\") pod \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\" (UID: \"a310f02b-1e9b-44c2-ac1e-39737a0123d7\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.821394 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1953dab-c7b4-46bb-86a8-f83e7db63538-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1953dab-c7b4-46bb-86a8-f83e7db63538" (UID: "c1953dab-c7b4-46bb-86a8-f83e7db63538"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.821908 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a310f02b-1e9b-44c2-ac1e-39737a0123d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a310f02b-1e9b-44c2-ac1e-39737a0123d7" (UID: "a310f02b-1e9b-44c2-ac1e-39737a0123d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.822486 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99795e69-f0fd-4764-94d1-45148eaed6f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99795e69-f0fd-4764-94d1-45148eaed6f7" (UID: "99795e69-f0fd-4764-94d1-45148eaed6f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.822701 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aabbcf91-7bbc-41e0-9282-d89d88fa89b7" (UID: "aabbcf91-7bbc-41e0-9282-d89d88fa89b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.822803 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0b797e-86af-4db9-b133-f12052f4a258-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e0b797e-86af-4db9-b133-f12052f4a258" (UID: "2e0b797e-86af-4db9-b133-f12052f4a258"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.827630 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99795e69-f0fd-4764-94d1-45148eaed6f7-kube-api-access-n67b9" (OuterVolumeSpecName: "kube-api-access-n67b9") pod "99795e69-f0fd-4764-94d1-45148eaed6f7" (UID: "99795e69-f0fd-4764-94d1-45148eaed6f7"). InnerVolumeSpecName "kube-api-access-n67b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.827783 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-kube-api-access-j2ww6" (OuterVolumeSpecName: "kube-api-access-j2ww6") pod "aabbcf91-7bbc-41e0-9282-d89d88fa89b7" (UID: "aabbcf91-7bbc-41e0-9282-d89d88fa89b7"). InnerVolumeSpecName "kube-api-access-j2ww6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.828717 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0b797e-86af-4db9-b133-f12052f4a258-kube-api-access-zkmh8" (OuterVolumeSpecName: "kube-api-access-zkmh8") pod "2e0b797e-86af-4db9-b133-f12052f4a258" (UID: "2e0b797e-86af-4db9-b133-f12052f4a258"). InnerVolumeSpecName "kube-api-access-zkmh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.829326 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a310f02b-1e9b-44c2-ac1e-39737a0123d7-kube-api-access-wmhpz" (OuterVolumeSpecName: "kube-api-access-wmhpz") pod "a310f02b-1e9b-44c2-ac1e-39737a0123d7" (UID: "a310f02b-1e9b-44c2-ac1e-39737a0123d7"). InnerVolumeSpecName "kube-api-access-wmhpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.829386 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b14b-account-create-cpb55" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.831114 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1953dab-c7b4-46bb-86a8-f83e7db63538-kube-api-access-chdtf" (OuterVolumeSpecName: "kube-api-access-chdtf") pod "c1953dab-c7b4-46bb-86a8-f83e7db63538" (UID: "c1953dab-c7b4-46bb-86a8-f83e7db63538"). InnerVolumeSpecName "kube-api-access-chdtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922161 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5cc4054-b04a-4b60-be8b-8411ded0d63a-operator-scripts\") pod \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\" (UID: \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922261 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dprzc\" (UniqueName: \"kubernetes.io/projected/e5cc4054-b04a-4b60-be8b-8411ded0d63a-kube-api-access-dprzc\") pod \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\" (UID: \"e5cc4054-b04a-4b60-be8b-8411ded0d63a\") " Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5cc4054-b04a-4b60-be8b-8411ded0d63a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5cc4054-b04a-4b60-be8b-8411ded0d63a" (UID: "e5cc4054-b04a-4b60-be8b-8411ded0d63a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922834 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmhpz\" (UniqueName: \"kubernetes.io/projected/a310f02b-1e9b-44c2-ac1e-39737a0123d7-kube-api-access-wmhpz\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922849 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5cc4054-b04a-4b60-be8b-8411ded0d63a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922860 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n67b9\" (UniqueName: \"kubernetes.io/projected/99795e69-f0fd-4764-94d1-45148eaed6f7-kube-api-access-n67b9\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922869 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922877 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkmh8\" (UniqueName: \"kubernetes.io/projected/2e0b797e-86af-4db9-b133-f12052f4a258-kube-api-access-zkmh8\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922885 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0b797e-86af-4db9-b133-f12052f4a258-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922895 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chdtf\" (UniqueName: \"kubernetes.io/projected/c1953dab-c7b4-46bb-86a8-f83e7db63538-kube-api-access-chdtf\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922903 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99795e69-f0fd-4764-94d1-45148eaed6f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922913 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a310f02b-1e9b-44c2-ac1e-39737a0123d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922920 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1953dab-c7b4-46bb-86a8-f83e7db63538-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.922928 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2ww6\" (UniqueName: \"kubernetes.io/projected/aabbcf91-7bbc-41e0-9282-d89d88fa89b7-kube-api-access-j2ww6\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:16 crc kubenswrapper[4751]: I1123 04:11:16.926640 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5cc4054-b04a-4b60-be8b-8411ded0d63a-kube-api-access-dprzc" (OuterVolumeSpecName: "kube-api-access-dprzc") pod "e5cc4054-b04a-4b60-be8b-8411ded0d63a" (UID: "e5cc4054-b04a-4b60-be8b-8411ded0d63a"). InnerVolumeSpecName "kube-api-access-dprzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:17 crc kubenswrapper[4751]: I1123 04:11:17.024650 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dprzc\" (UniqueName: \"kubernetes.io/projected/e5cc4054-b04a-4b60-be8b-8411ded0d63a-kube-api-access-dprzc\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:17 crc kubenswrapper[4751]: I1123 04:11:17.808653 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-42b4-account-create-h4mjj" Nov 23 04:11:17 crc kubenswrapper[4751]: I1123 04:11:17.808692 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pg58" event={"ID":"30448e27-081b-401d-abe4-b454365c5831","Type":"ContainerStarted","Data":"5a1fe356fe3d28a8621c54b1ab73ba25e8b4f353169c54b74fee6f922bd97365"} Nov 23 04:11:17 crc kubenswrapper[4751]: I1123 04:11:17.808751 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b14b-account-create-cpb55" Nov 23 04:11:17 crc kubenswrapper[4751]: I1123 04:11:17.808751 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cwkbl" Nov 23 04:11:17 crc kubenswrapper[4751]: I1123 04:11:17.843159 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2pg58" podStartSLOduration=2.05838952 podStartE2EDuration="6.843133925s" podCreationTimestamp="2025-11-23 04:11:11 +0000 UTC" firstStartedPulling="2025-11-23 04:11:11.830492759 +0000 UTC m=+968.024164118" lastFinishedPulling="2025-11-23 04:11:16.615237164 +0000 UTC m=+972.808908523" observedRunningTime="2025-11-23 04:11:17.836447448 +0000 UTC m=+974.030118817" watchObservedRunningTime="2025-11-23 04:11:17.843133925 +0000 UTC m=+974.036805304" Nov 23 04:11:19 crc kubenswrapper[4751]: I1123 04:11:19.832336 4751 generic.go:334] "Generic (PLEG): container finished" podID="30448e27-081b-401d-abe4-b454365c5831" containerID="5a1fe356fe3d28a8621c54b1ab73ba25e8b4f353169c54b74fee6f922bd97365" exitCode=0 Nov 23 04:11:19 crc kubenswrapper[4751]: I1123 04:11:19.832451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pg58" event={"ID":"30448e27-081b-401d-abe4-b454365c5831","Type":"ContainerDied","Data":"5a1fe356fe3d28a8621c54b1ab73ba25e8b4f353169c54b74fee6f922bd97365"} Nov 23 04:11:20 crc kubenswrapper[4751]: I1123 04:11:20.401703 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" Nov 23 04:11:20 crc kubenswrapper[4751]: I1123 04:11:20.503458 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ktpgz"] Nov 23 04:11:20 crc kubenswrapper[4751]: I1123 04:11:20.503740 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" podUID="8854c4a0-71d3-4af3-9470-eee896ccb80c" containerName="dnsmasq-dns" containerID="cri-o://8a429cc614e5605e1cbb68c8f4a5da44c508d07227240f7e2855ffb0a588d945" gracePeriod=10 Nov 23 04:11:20 crc kubenswrapper[4751]: I1123 04:11:20.846565 4751 generic.go:334] "Generic (PLEG): container finished" podID="8854c4a0-71d3-4af3-9470-eee896ccb80c" containerID="8a429cc614e5605e1cbb68c8f4a5da44c508d07227240f7e2855ffb0a588d945" exitCode=0 Nov 23 04:11:20 crc kubenswrapper[4751]: I1123 04:11:20.846623 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" 
event={"ID":"8854c4a0-71d3-4af3-9470-eee896ccb80c","Type":"ContainerDied","Data":"8a429cc614e5605e1cbb68c8f4a5da44c508d07227240f7e2855ffb0a588d945"} Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.001991 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.090281 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x4pl\" (UniqueName: \"kubernetes.io/projected/8854c4a0-71d3-4af3-9470-eee896ccb80c-kube-api-access-2x4pl\") pod \"8854c4a0-71d3-4af3-9470-eee896ccb80c\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.090378 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-config\") pod \"8854c4a0-71d3-4af3-9470-eee896ccb80c\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.090471 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-sb\") pod \"8854c4a0-71d3-4af3-9470-eee896ccb80c\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.090498 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-nb\") pod \"8854c4a0-71d3-4af3-9470-eee896ccb80c\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.090522 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-dns-svc\") pod \"8854c4a0-71d3-4af3-9470-eee896ccb80c\" (UID: \"8854c4a0-71d3-4af3-9470-eee896ccb80c\") " Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.097874 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8854c4a0-71d3-4af3-9470-eee896ccb80c-kube-api-access-2x4pl" (OuterVolumeSpecName: "kube-api-access-2x4pl") pod "8854c4a0-71d3-4af3-9470-eee896ccb80c" (UID: "8854c4a0-71d3-4af3-9470-eee896ccb80c"). InnerVolumeSpecName "kube-api-access-2x4pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.137025 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8854c4a0-71d3-4af3-9470-eee896ccb80c" (UID: "8854c4a0-71d3-4af3-9470-eee896ccb80c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.139189 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8854c4a0-71d3-4af3-9470-eee896ccb80c" (UID: "8854c4a0-71d3-4af3-9470-eee896ccb80c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.141164 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8854c4a0-71d3-4af3-9470-eee896ccb80c" (UID: "8854c4a0-71d3-4af3-9470-eee896ccb80c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.146256 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-config" (OuterVolumeSpecName: "config") pod "8854c4a0-71d3-4af3-9470-eee896ccb80c" (UID: "8854c4a0-71d3-4af3-9470-eee896ccb80c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.183642 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2pg58" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.192104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-combined-ca-bundle\") pod \"30448e27-081b-401d-abe4-b454365c5831\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.192205 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wbc9\" (UniqueName: \"kubernetes.io/projected/30448e27-081b-401d-abe4-b454365c5831-kube-api-access-6wbc9\") pod \"30448e27-081b-401d-abe4-b454365c5831\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.192233 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-config-data\") pod \"30448e27-081b-401d-abe4-b454365c5831\" (UID: \"30448e27-081b-401d-abe4-b454365c5831\") " Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.192699 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.192734 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.192747 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.192762 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x4pl\" (UniqueName: \"kubernetes.io/projected/8854c4a0-71d3-4af3-9470-eee896ccb80c-kube-api-access-2x4pl\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.192776 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8854c4a0-71d3-4af3-9470-eee896ccb80c-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 
04:11:21.198441 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30448e27-081b-401d-abe4-b454365c5831-kube-api-access-6wbc9" (OuterVolumeSpecName: "kube-api-access-6wbc9") pod "30448e27-081b-401d-abe4-b454365c5831" (UID: "30448e27-081b-401d-abe4-b454365c5831"). InnerVolumeSpecName "kube-api-access-6wbc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.214575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30448e27-081b-401d-abe4-b454365c5831" (UID: "30448e27-081b-401d-abe4-b454365c5831"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.245651 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-config-data" (OuterVolumeSpecName: "config-data") pod "30448e27-081b-401d-abe4-b454365c5831" (UID: "30448e27-081b-401d-abe4-b454365c5831"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.293855 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.293883 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wbc9\" (UniqueName: \"kubernetes.io/projected/30448e27-081b-401d-abe4-b454365c5831-kube-api-access-6wbc9\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.293893 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30448e27-081b-401d-abe4-b454365c5831-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.860587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2pg58" event={"ID":"30448e27-081b-401d-abe4-b454365c5831","Type":"ContainerDied","Data":"1c771371e1c53d20e8e182025d2e3108775d09bb93c4ff82ec95924b6d640e14"} Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.861432 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c771371e1c53d20e8e182025d2e3108775d09bb93c4ff82ec95924b6d640e14" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.860665 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2pg58" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.863618 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" event={"ID":"8854c4a0-71d3-4af3-9470-eee896ccb80c","Type":"ContainerDied","Data":"e7ff296d83793088691e4081a60ae51d12d6ae2a20984e6183cb3d1b0cd81446"} Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.863707 4751 scope.go:117] "RemoveContainer" containerID="8a429cc614e5605e1cbb68c8f4a5da44c508d07227240f7e2855ffb0a588d945" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.863890 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ktpgz" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.914106 4751 scope.go:117] "RemoveContainer" containerID="016a420501c8d57a324a1fe0cff7340dd60731d9f9417de653c37c79e8ad43a7" Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.945625 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ktpgz"] Nov 23 04:11:21 crc kubenswrapper[4751]: I1123 04:11:21.946449 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ktpgz"] Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209411 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nttr5"] Nov 23 04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209749 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99795e69-f0fd-4764-94d1-45148eaed6f7" containerName="mariadb-account-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209765 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795e69-f0fd-4764-94d1-45148eaed6f7" containerName="mariadb-account-create" Nov 23 04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209776 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1953dab-c7b4-46bb-86a8-f83e7db63538" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209782 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1953dab-c7b4-46bb-86a8-f83e7db63538" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209801 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8854c4a0-71d3-4af3-9470-eee896ccb80c" containerName="init" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209812 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8854c4a0-71d3-4af3-9470-eee896ccb80c" containerName="init" Nov 23 04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209832 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabbcf91-7bbc-41e0-9282-d89d88fa89b7" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209840 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabbcf91-7bbc-41e0-9282-d89d88fa89b7" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209856 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30448e27-081b-401d-abe4-b454365c5831" containerName="keystone-db-sync" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209862 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="30448e27-081b-401d-abe4-b454365c5831" containerName="keystone-db-sync" Nov 23 04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209874 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8854c4a0-71d3-4af3-9470-eee896ccb80c" containerName="dnsmasq-dns" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209879 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8854c4a0-71d3-4af3-9470-eee896ccb80c" containerName="dnsmasq-dns" Nov 23 04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209887 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cc4054-b04a-4b60-be8b-8411ded0d63a" containerName="mariadb-account-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209893 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cc4054-b04a-4b60-be8b-8411ded0d63a" containerName="mariadb-account-create" Nov 23 
04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209907 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0b797e-86af-4db9-b133-f12052f4a258" containerName="mariadb-account-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209912 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0b797e-86af-4db9-b133-f12052f4a258" containerName="mariadb-account-create" Nov 23 04:11:22 crc kubenswrapper[4751]: E1123 04:11:22.209924 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a310f02b-1e9b-44c2-ac1e-39737a0123d7" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.209930 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a310f02b-1e9b-44c2-ac1e-39737a0123d7" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210089 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0b797e-86af-4db9-b133-f12052f4a258" containerName="mariadb-account-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210104 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabbcf91-7bbc-41e0-9282-d89d88fa89b7" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210112 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1953dab-c7b4-46bb-86a8-f83e7db63538" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210121 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8854c4a0-71d3-4af3-9470-eee896ccb80c" containerName="dnsmasq-dns" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210132 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5cc4054-b04a-4b60-be8b-8411ded0d63a" containerName="mariadb-account-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210139 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="99795e69-f0fd-4764-94d1-45148eaed6f7" containerName="mariadb-account-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210150 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a310f02b-1e9b-44c2-ac1e-39737a0123d7" containerName="mariadb-database-create" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210162 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="30448e27-081b-401d-abe4-b454365c5831" containerName="keystone-db-sync" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.210677 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nttr5" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.213365 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.213594 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.213643 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.213872 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.225194 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lzfsl"] Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.227633 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.228733 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vxbn5" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.237871 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nttr5"] Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.259559 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lzfsl"] Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339271 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whh29\" (UniqueName: \"kubernetes.io/projected/97b6c290-253b-4762-b9a7-1807ea27f96f-kube-api-access-whh29\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339328 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-config\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339375 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-fernet-keys\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339397 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-config-data\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5" Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339424 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl" Nov 23 04:11:22 crc 
kubenswrapper[4751]: I1123 04:11:22.339441 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-scripts\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339479 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339506 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-credential-keys\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339520 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339547 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-combined-ca-bundle\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.339608 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjsx\" (UniqueName: \"kubernetes.io/projected/ee08beab-370b-40ca-8ad5-10f8d86ffc53-kube-api-access-kwjsx\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.392215 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7595df9dd7-gj6cm"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.393515 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.394991 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-4bdp5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.395653 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.397658 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.397828 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.414005 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7595df9dd7-gj6cm"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.439572 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-slzk9"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440465 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwjsx\" (UniqueName: \"kubernetes.io/projected/ee08beab-370b-40ca-8ad5-10f8d86ffc53-kube-api-access-kwjsx\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440517 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66bd4f0d-d90e-4991-adb5-2844154b5de2-logs\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440546 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whh29\" (UniqueName: \"kubernetes.io/projected/97b6c290-253b-4762-b9a7-1807ea27f96f-kube-api-access-whh29\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440565 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8s8p\" (UniqueName: \"kubernetes.io/projected/66bd4f0d-d90e-4991-adb5-2844154b5de2-kube-api-access-j8s8p\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-config\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440610 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-fernet-keys\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440629 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-config-data\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-config-data\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440668 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440682 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-scripts\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440691 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440711 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-scripts\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440733 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-credential-keys\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440776 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440794 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66bd4f0d-d90e-4991-adb5-2844154b5de2-horizon-secret-key\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440816 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.440838 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-combined-ca-bundle\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.442941 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.443646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.444934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-config\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.444988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.445542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.448765 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-combined-ca-bundle\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.450060 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-credential-keys\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.455110 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.455328 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nr5bj"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.455463 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.455714 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-scripts\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.461029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-config-data\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.466982 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-fernet-keys\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.469462 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jkfd7"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.470646 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.475028 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.475214 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xtrvh"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.475317 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.481967 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whh29\" (UniqueName: \"kubernetes.io/projected/97b6c290-253b-4762-b9a7-1807ea27f96f-kube-api-access-whh29\") pod \"keystone-bootstrap-nttr5\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.482035 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-slzk9"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.482096 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwjsx\" (UniqueName: \"kubernetes.io/projected/ee08beab-370b-40ca-8ad5-10f8d86ffc53-kube-api-access-kwjsx\") pod \"dnsmasq-dns-bbf5cc879-lzfsl\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.494909 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jkfd7"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.535949 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.537940 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.539156 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.540798 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541754 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66bd4f0d-d90e-4991-adb5-2844154b5de2-logs\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs7k8\" (UniqueName: \"kubernetes.io/projected/37ead28c-46bc-4415-a35c-1d3d8de722dd-kube-api-access-fs7k8\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541825 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8s8p\" (UniqueName: \"kubernetes.io/projected/66bd4f0d-d90e-4991-adb5-2844154b5de2-kube-api-access-j8s8p\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-config-data\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541883 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37ead28c-46bc-4415-a35c-1d3d8de722dd-etc-machine-id\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541900 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-db-sync-config-data\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541918 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9dw\" (UniqueName: \"kubernetes.io/projected/d4181f6c-4f0a-41fb-af82-f7f10f85c117-kube-api-access-zd9dw\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-scripts\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-config\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.541983 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-scripts\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.542001 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-combined-ca-bundle\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.542017 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66bd4f0d-d90e-4991-adb5-2844154b5de2-horizon-secret-key\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.542037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-combined-ca-bundle\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.542056 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-config-data\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.543740 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-config-data\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.543984 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-scripts\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.544207 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66bd4f0d-d90e-4991-adb5-2844154b5de2-logs\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.545694 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nttr5"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.557042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66bd4f0d-d90e-4991-adb5-2844154b5de2-horizon-secret-key\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.558636 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.574945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8s8p\" (UniqueName: \"kubernetes.io/projected/66bd4f0d-d90e-4991-adb5-2844154b5de2-kube-api-access-j8s8p\") pod \"horizon-7595df9dd7-gj6cm\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.601178 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.618953 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-q65sd"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.620284 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.622685 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.622869 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s4sk6"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.625327 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-q65sd"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.643984 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-log-httpd\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644031 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37ead28c-46bc-4415-a35c-1d3d8de722dd-etc-machine-id\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644053 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-db-sync-config-data\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644074 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9dw\" (UniqueName: \"kubernetes.io/projected/d4181f6c-4f0a-41fb-af82-f7f10f85c117-kube-api-access-zd9dw\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-config\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644127 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-scripts\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-run-httpd\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644163 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-combined-ca-bundle\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644183 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-config-data\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644198 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-combined-ca-bundle\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644218 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-config-data\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-scripts\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644273 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644298 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs7k8\" (UniqueName: \"kubernetes.io/projected/37ead28c-46bc-4415-a35c-1d3d8de722dd-kube-api-access-fs7k8\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbtcm\" (UniqueName: \"kubernetes.io/projected/7b369c41-886d-44cc-821b-2d415431f9ec-kube-api-access-sbtcm\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644356 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.644427 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37ead28c-46bc-4415-a35c-1d3d8de722dd-etc-machine-id\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.649628 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-db-sync-config-data\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.663151 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-combined-ca-bundle\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.666792 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-scripts\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.670295 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-config\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.683117 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-combined-ca-bundle\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.689904 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-config-data\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.700692 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9dw\" (UniqueName: \"kubernetes.io/projected/d4181f6c-4f0a-41fb-af82-f7f10f85c117-kube-api-access-zd9dw\") pod \"neutron-db-sync-jkfd7\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.701384 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8854c4a0-71d3-4af3-9470-eee896ccb80c" path="/var/lib/kubelet/pods/8854c4a0-71d3-4af3-9470-eee896ccb80c/volumes"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.723062 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7595df9dd7-gj6cm"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.739101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs7k8\" (UniqueName: \"kubernetes.io/projected/37ead28c-46bc-4415-a35c-1d3d8de722dd-kube-api-access-fs7k8\") pod \"cinder-db-sync-slzk9\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.769611 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-log-httpd\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.769801 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wxbw\" (UniqueName: \"kubernetes.io/projected/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-kube-api-access-7wxbw\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.769836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-run-httpd\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.769891 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-config-data\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.769979 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-scripts\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.770014 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-db-sync-config-data\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.770052 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-combined-ca-bundle\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.770094 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.770148 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbtcm\" (UniqueName: \"kubernetes.io/projected/7b369c41-886d-44cc-821b-2d415431f9ec-kube-api-access-sbtcm\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.770200 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.779057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-log-httpd\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.779681 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-run-httpd\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.788838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-scripts\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.803271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.818501 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hhmh2"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.824202 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-config-data\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.825808 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.831574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbtcm\" (UniqueName: \"kubernetes.io/projected/7b369c41-886d-44cc-821b-2d415431f9ec-kube-api-access-sbtcm\") pod \"ceilometer-0\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.836296 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.839267 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.840818 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mh4tl"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.846053 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.860180 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6594b8fbdc-sj2cr"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.865579 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jkfd7"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.880261 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.894685 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.894856 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wxbw\" (UniqueName: \"kubernetes.io/projected/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-kube-api-access-7wxbw\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.894987 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-db-sync-config-data\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.895020 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-combined-ca-bundle\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.896016 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-slzk9"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.908070 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-combined-ca-bundle\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.916880 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-db-sync-config-data\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.920229 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lzfsl"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.924931 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wxbw\" (UniqueName: \"kubernetes.io/projected/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-kube-api-access-7wxbw\") pod \"barbican-db-sync-q65sd\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " pod="openstack/barbican-db-sync-q65sd"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.926400 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hhmh2"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.933431 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.940950 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.942650 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-tbgwf"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.943528 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.944240 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hp66g"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.944668 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.945025 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.949910 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.951785 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6594b8fbdc-sj2cr"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.962383 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-tbgwf"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.984440 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998522 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-scripts\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998560 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqq7n\" (UniqueName: \"kubernetes.io/projected/3f54d651-41da-46a8-a8c3-27fb1ad249b8-kube-api-access-nqq7n\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998586 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x79lk\" (UniqueName: \"kubernetes.io/projected/ee0ce204-8f54-4c8c-98a6-f18f36873339-kube-api-access-x79lk\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998646 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f54d651-41da-46a8-a8c3-27fb1ad249b8-horizon-secret-key\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998680 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ce204-8f54-4c8c-98a6-f18f36873339-logs\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998715 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-scripts\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-combined-ca-bundle\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998767 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-config-data\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998785 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54d651-41da-46a8-a8c3-27fb1ad249b8-logs\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:22 crc kubenswrapper[4751]: I1123 04:11:22.998873 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-config-data\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.000926 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.002924 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.010024 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.010251 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.022410 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.100805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.100861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.100884 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.100913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.100939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-config-data\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.100959 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54d651-41da-46a8-a8c3-27fb1ad249b8-logs\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.100983 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101008 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101027 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101048 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101074 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-config-data\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101090 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjpwk\" (UniqueName: \"kubernetes.io/projected/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-kube-api-access-cjpwk\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101126 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-scripts\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101141 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-config\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101159 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-config-data\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-scripts\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101198 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzkq\" (UniqueName: \"kubernetes.io/projected/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-kube-api-access-qlzkq\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101232 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqq7n\" (UniqueName: \"kubernetes.io/projected/3f54d651-41da-46a8-a8c3-27fb1ad249b8-kube-api-access-nqq7n\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101248 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x79lk\" (UniqueName: \"kubernetes.io/projected/ee0ce204-8f54-4c8c-98a6-f18f36873339-kube-api-access-x79lk\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101272 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101288 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101312 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f54d651-41da-46a8-a8c3-27fb1ad249b8-horizon-secret-key\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101336 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101394 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-logs\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101411 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46gwd\" (UniqueName: \"kubernetes.io/projected/24d654bf-e6c3-4579-b509-c5f49d9d9da9-kube-api-access-46gwd\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101435 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ce204-8f54-4c8c-98a6-f18f36873339-logs\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101452 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-scripts\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101501 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.101521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-combined-ca-bundle\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.104789 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-config-data\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.109603 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54d651-41da-46a8-a8c3-27fb1ad249b8-logs\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.110191 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-scripts\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.111487 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ce204-8f54-4c8c-98a6-f18f36873339-logs\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.117389 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-combined-ca-bundle\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.121885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqq7n\" (UniqueName: \"kubernetes.io/projected/3f54d651-41da-46a8-a8c3-27fb1ad249b8-kube-api-access-nqq7n\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.121980 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f54d651-41da-46a8-a8c3-27fb1ad249b8-horizon-secret-key\") pod \"horizon-6594b8fbdc-sj2cr\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " pod="openstack/horizon-6594b8fbdc-sj2cr"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.122041 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-config-data\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.122256 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-scripts\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.139517 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x79lk\" (UniqueName: \"kubernetes.io/projected/ee0ce204-8f54-4c8c-98a6-f18f36873339-kube-api-access-x79lk\") pod \"placement-db-sync-hhmh2\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " pod="openstack/placement-db-sync-hhmh2"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.204683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.204731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-logs\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.204755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46gwd\" (UniqueName: \"kubernetes.io/projected/24d654bf-e6c3-4579-b509-c5f49d9d9da9-kube-api-access-46gwd\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.204780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.204819 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.204839 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf"
Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.204834 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-q65sd" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.204856 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205329 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205435 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205494 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205562 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205570 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205606 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-config-data\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205651 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205668 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpwk\" (UniqueName: \"kubernetes.io/projected/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-kube-api-access-cjpwk\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-scripts\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205702 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-config\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205742 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlzkq\" (UniqueName: \"kubernetes.io/projected/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-kube-api-access-qlzkq\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205788 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.205886 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.206083 4751 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.206986 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.207566 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-config\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.207686 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.208279 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.208645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-logs\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.209018 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.209300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.209572 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.215145 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.216464 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.217101 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.218868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-scripts\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.220471 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.226315 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hhmh2" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.229395 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpwk\" (UniqueName: \"kubernetes.io/projected/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-kube-api-access-cjpwk\") pod \"dnsmasq-dns-56df8fb6b7-tbgwf\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.230885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.233146 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.237585 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46gwd\" (UniqueName: \"kubernetes.io/projected/24d654bf-e6c3-4579-b509-c5f49d9d9da9-kube-api-access-46gwd\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.237845 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlzkq\" (UniqueName: 
\"kubernetes.io/projected/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-kube-api-access-qlzkq\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.237962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.259696 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6594b8fbdc-sj2cr" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.269993 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nttr5"] Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.277814 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lzfsl"] Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.289752 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.294463 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.311844 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.326418 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.337845 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.512550 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jkfd7"] Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.528470 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7595df9dd7-gj6cm"] Nov 23 04:11:23 crc kubenswrapper[4751]: W1123 04:11:23.551436 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4181f6c_4f0a_41fb_af82_f7f10f85c117.slice/crio-bdb41ab633aeb4cfebad17d25ffb16d044ab46adda60193229e5ffce6dcd25e0 WatchSource:0}: Error finding container bdb41ab633aeb4cfebad17d25ffb16d044ab46adda60193229e5ffce6dcd25e0: Status 404 returned error can't find the container with id bdb41ab633aeb4cfebad17d25ffb16d044ab46adda60193229e5ffce6dcd25e0 Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.558165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:11:23 crc kubenswrapper[4751]: W1123 04:11:23.558499 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice/crio-aef38b47615f7eb6446d09e702c3d83c81aaf1332d3005e1f772fcd88e238017 WatchSource:0}: Error finding container aef38b47615f7eb6446d09e702c3d83c81aaf1332d3005e1f772fcd88e238017: Status 404 returned error can't find the container with id aef38b47615f7eb6446d09e702c3d83c81aaf1332d3005e1f772fcd88e238017 Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.757332 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-slzk9"] Nov 23 04:11:23 crc kubenswrapper[4751]: W1123 04:11:23.759933 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ead28c_46bc_4415_a35c_1d3d8de722dd.slice/crio-aa831b7c31029ec4c4eed89d5d60f5e5eed7fda9d19aa9765e3b11733c28b33d WatchSource:0}: Error finding container aa831b7c31029ec4c4eed89d5d60f5e5eed7fda9d19aa9765e3b11733c28b33d: Status 404 returned error can't find the container with id aa831b7c31029ec4c4eed89d5d60f5e5eed7fda9d19aa9765e3b11733c28b33d Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.899761 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-q65sd"] Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.915223 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b369c41-886d-44cc-821b-2d415431f9ec","Type":"ContainerStarted","Data":"aef38b47615f7eb6446d09e702c3d83c81aaf1332d3005e1f772fcd88e238017"} Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.917582 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nttr5" event={"ID":"97b6c290-253b-4762-b9a7-1807ea27f96f","Type":"ContainerStarted","Data":"1dee67d97dbfdf8194c1fd9a53f6643b7a9747052f94be138bf4f11bf799e1e9"} Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.917619 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nttr5" event={"ID":"97b6c290-253b-4762-b9a7-1807ea27f96f","Type":"ContainerStarted","Data":"9d5b4a9692795043126ebf7ad5b085ea2913f29a24d53894da999ae48df53ffc"} Nov 23 04:11:23 crc kubenswrapper[4751]: W1123 04:11:23.918433 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ffd47c6_ed23_4bc3_be63_dd817807dc3e.slice/crio-1ce01dd4b57ae884254f35e240d21d3006da5bf93add4876ebe0b5eaeb4e0fc9 WatchSource:0}: Error finding container 1ce01dd4b57ae884254f35e240d21d3006da5bf93add4876ebe0b5eaeb4e0fc9: Status 404 returned error can't find the container with id 1ce01dd4b57ae884254f35e240d21d3006da5bf93add4876ebe0b5eaeb4e0fc9 Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.925880 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hhmh2"] Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.929910 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jkfd7" event={"ID":"d4181f6c-4f0a-41fb-af82-f7f10f85c117","Type":"ContainerStarted","Data":"bdb41ab633aeb4cfebad17d25ffb16d044ab46adda60193229e5ffce6dcd25e0"} Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.939448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-slzk9" event={"ID":"37ead28c-46bc-4415-a35c-1d3d8de722dd","Type":"ContainerStarted","Data":"aa831b7c31029ec4c4eed89d5d60f5e5eed7fda9d19aa9765e3b11733c28b33d"} Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.940958 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nttr5" podStartSLOduration=1.940937483 podStartE2EDuration="1.940937483s" podCreationTimestamp="2025-11-23 04:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:23.934322098 +0000 UTC m=+980.127993487" watchObservedRunningTime="2025-11-23 04:11:23.940937483 +0000 UTC m=+980.134608842" Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.972147 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee08beab-370b-40ca-8ad5-10f8d86ffc53" containerID="4114766ba00cf70ec4429638818d4bca39ee2b4242f85c1023326a1d20307e35" exitCode=0 Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.972421 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl" event={"ID":"ee08beab-370b-40ca-8ad5-10f8d86ffc53","Type":"ContainerDied","Data":"4114766ba00cf70ec4429638818d4bca39ee2b4242f85c1023326a1d20307e35"} Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.972464 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl" event={"ID":"ee08beab-370b-40ca-8ad5-10f8d86ffc53","Type":"ContainerStarted","Data":"c40bbfc30e316e205ca5019bd82a03856209ab77c0b246d7c42dbe859ae01e1b"} Nov 23 04:11:23 crc kubenswrapper[4751]: I1123 04:11:23.986483 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7595df9dd7-gj6cm" event={"ID":"66bd4f0d-d90e-4991-adb5-2844154b5de2","Type":"ContainerStarted","Data":"1e058a2bb08661c028236032fdedaca6ac3031fff221ddd2159387588e4a7afc"} Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.007413 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6594b8fbdc-sj2cr"] Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.032316 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-tbgwf"] Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.103632 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:11:24 crc kubenswrapper[4751]: W1123 04:11:24.111144 4751 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d654bf_e6c3_4579_b509_c5f49d9d9da9.slice/crio-5d6cdc6366ccd7b6e6c8fa7b0c90193596dea0b41aa77c940882f16e610e04d6 WatchSource:0}: Error finding container 5d6cdc6366ccd7b6e6c8fa7b0c90193596dea0b41aa77c940882f16e610e04d6: Status 404 returned error can't find the container with id 5d6cdc6366ccd7b6e6c8fa7b0c90193596dea0b41aa77c940882f16e610e04d6 Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.219213 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:11:24 crc kubenswrapper[4751]: W1123 04:11:24.277560 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35d1922e_4ae6_48d2_bfc4_d5acee2909dc.slice/crio-bfed6211eb7e059045fbf61a494a671c7a6b46f4e5721d882e9bd9679937b89b WatchSource:0}: Error finding container bfed6211eb7e059045fbf61a494a671c7a6b46f4e5721d882e9bd9679937b89b: Status 404 returned error can't find the container with id bfed6211eb7e059045fbf61a494a671c7a6b46f4e5721d882e9bd9679937b89b Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.292008 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.332859 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-nb\") pod \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.332912 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-sb\") pod \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.332948 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-config\") pod \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.333012 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-svc\") pod \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.333062 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwjsx\" (UniqueName: \"kubernetes.io/projected/ee08beab-370b-40ca-8ad5-10f8d86ffc53-kube-api-access-kwjsx\") pod \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.333101 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-swift-storage-0\") pod \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\" (UID: \"ee08beab-370b-40ca-8ad5-10f8d86ffc53\") " Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.359260 4751 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee08beab-370b-40ca-8ad5-10f8d86ffc53-kube-api-access-kwjsx" (OuterVolumeSpecName: "kube-api-access-kwjsx") pod "ee08beab-370b-40ca-8ad5-10f8d86ffc53" (UID: "ee08beab-370b-40ca-8ad5-10f8d86ffc53"). InnerVolumeSpecName "kube-api-access-kwjsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.360815 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee08beab-370b-40ca-8ad5-10f8d86ffc53" (UID: "ee08beab-370b-40ca-8ad5-10f8d86ffc53"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.373950 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee08beab-370b-40ca-8ad5-10f8d86ffc53" (UID: "ee08beab-370b-40ca-8ad5-10f8d86ffc53"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.383031 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-config" (OuterVolumeSpecName: "config") pod "ee08beab-370b-40ca-8ad5-10f8d86ffc53" (UID: "ee08beab-370b-40ca-8ad5-10f8d86ffc53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.389327 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee08beab-370b-40ca-8ad5-10f8d86ffc53" (UID: "ee08beab-370b-40ca-8ad5-10f8d86ffc53"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.390809 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee08beab-370b-40ca-8ad5-10f8d86ffc53" (UID: "ee08beab-370b-40ca-8ad5-10f8d86ffc53"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.437282 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.437315 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.437326 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.437335 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.437415 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwjsx\" (UniqueName: \"kubernetes.io/projected/ee08beab-370b-40ca-8ad5-10f8d86ffc53-kube-api-access-kwjsx\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.437429 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee08beab-370b-40ca-8ad5-10f8d86ffc53-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.502574 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.529720 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6594b8fbdc-sj2cr"] Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.571802 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.590390 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84c958854f-wnwfq"] Nov 23 04:11:24 crc kubenswrapper[4751]: E1123 04:11:24.590745 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee08beab-370b-40ca-8ad5-10f8d86ffc53" containerName="init" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.590756 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee08beab-370b-40ca-8ad5-10f8d86ffc53" containerName="init" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.593204 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee08beab-370b-40ca-8ad5-10f8d86ffc53" containerName="init" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.594262 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.610503 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.616933 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84c958854f-wnwfq"] Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.641660 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f395bca-24d1-4b23-8eb2-782713ee4d9a-horizon-secret-key\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.641724 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-config-data\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.641796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-scripts\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.641826 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwk2j\" (UniqueName: \"kubernetes.io/projected/9f395bca-24d1-4b23-8eb2-782713ee4d9a-kube-api-access-zwk2j\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.641857 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f395bca-24d1-4b23-8eb2-782713ee4d9a-logs\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.744510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-scripts\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.745181 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwk2j\" (UniqueName: \"kubernetes.io/projected/9f395bca-24d1-4b23-8eb2-782713ee4d9a-kube-api-access-zwk2j\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.750000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-scripts\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.750240 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f395bca-24d1-4b23-8eb2-782713ee4d9a-logs\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.750380 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f395bca-24d1-4b23-8eb2-782713ee4d9a-horizon-secret-key\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.750430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-config-data\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.751527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f395bca-24d1-4b23-8eb2-782713ee4d9a-logs\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.751921 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-config-data\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.753958 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f395bca-24d1-4b23-8eb2-782713ee4d9a-horizon-secret-key\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.765585 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwk2j\" (UniqueName: \"kubernetes.io/projected/9f395bca-24d1-4b23-8eb2-782713ee4d9a-kube-api-access-zwk2j\") pod \"horizon-84c958854f-wnwfq\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:24 crc kubenswrapper[4751]: I1123 04:11:24.914304 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.008685 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q65sd" event={"ID":"4ffd47c6-ed23-4bc3-be63-dd817807dc3e","Type":"ContainerStarted","Data":"1ce01dd4b57ae884254f35e240d21d3006da5bf93add4876ebe0b5eaeb4e0fc9"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.013571 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"24d654bf-e6c3-4579-b509-c5f49d9d9da9","Type":"ContainerStarted","Data":"5d6cdc6366ccd7b6e6c8fa7b0c90193596dea0b41aa77c940882f16e610e04d6"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.015768 4751 generic.go:334] "Generic (PLEG): container finished" podID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" containerID="01bcc21bd41c4c3f0f4a64e58c77446c651c4a893096fa98a58e0a82686b2ce7" exitCode=0 Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.015806 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" event={"ID":"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab","Type":"ContainerDied","Data":"01bcc21bd41c4c3f0f4a64e58c77446c651c4a893096fa98a58e0a82686b2ce7"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.015819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" event={"ID":"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab","Type":"ContainerStarted","Data":"d7fab97b623fccad4d291ad3dfedddcb440e48caf37ae5a7ef8d01b2e33f910f"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.021584 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35d1922e-4ae6-48d2-bfc4-d5acee2909dc","Type":"ContainerStarted","Data":"bfed6211eb7e059045fbf61a494a671c7a6b46f4e5721d882e9bd9679937b89b"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.043541 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6594b8fbdc-sj2cr" event={"ID":"3f54d651-41da-46a8-a8c3-27fb1ad249b8","Type":"ContainerStarted","Data":"12e82121806b67272ff595b9d8b0cb59cfd279bdc95e594bcbd1824cd89b8cfb"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.053908 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jkfd7" event={"ID":"d4181f6c-4f0a-41fb-af82-f7f10f85c117","Type":"ContainerStarted","Data":"ea8e5ba87f241184fa7a06b0e02fc44c9e26afc85a9c23c55a4b55105a6208fa"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.067182 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hhmh2" event={"ID":"ee0ce204-8f54-4c8c-98a6-f18f36873339","Type":"ContainerStarted","Data":"59c58ab702740765b3f4f688188125812509bb3438cd8cf60407128bb82d4c80"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.076314 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jkfd7" podStartSLOduration=3.076297754 podStartE2EDuration="3.076297754s" podCreationTimestamp="2025-11-23 04:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:25.075445542 +0000 UTC m=+981.269116911" watchObservedRunningTime="2025-11-23 04:11:25.076297754 +0000 UTC m=+981.269969113" Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.080529 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl" Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.080709 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-lzfsl" event={"ID":"ee08beab-370b-40ca-8ad5-10f8d86ffc53","Type":"ContainerDied","Data":"c40bbfc30e316e205ca5019bd82a03856209ab77c0b246d7c42dbe859ae01e1b"} Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.080747 4751 scope.go:117] "RemoveContainer" containerID="4114766ba00cf70ec4429638818d4bca39ee2b4242f85c1023326a1d20307e35" Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.193158 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lzfsl"] Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.208125 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-lzfsl"] Nov 23 04:11:25 crc kubenswrapper[4751]: I1123 04:11:25.597029 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84c958854f-wnwfq"] Nov 23 04:11:25 crc kubenswrapper[4751]: W1123 04:11:25.620196 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f395bca_24d1_4b23_8eb2_782713ee4d9a.slice/crio-0f3f4047ae7c7280921109033cc7d0625d880cfc49c6687aa8a7ba6cb821b9a6 WatchSource:0}: Error finding container 0f3f4047ae7c7280921109033cc7d0625d880cfc49c6687aa8a7ba6cb821b9a6: Status 404 returned error can't find the container with id 0f3f4047ae7c7280921109033cc7d0625d880cfc49c6687aa8a7ba6cb821b9a6 Nov 23 04:11:26 crc kubenswrapper[4751]: I1123 04:11:26.092161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84c958854f-wnwfq" event={"ID":"9f395bca-24d1-4b23-8eb2-782713ee4d9a","Type":"ContainerStarted","Data":"0f3f4047ae7c7280921109033cc7d0625d880cfc49c6687aa8a7ba6cb821b9a6"} Nov 23 04:11:26 crc kubenswrapper[4751]: I1123 04:11:26.109643 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35d1922e-4ae6-48d2-bfc4-d5acee2909dc","Type":"ContainerStarted","Data":"34e893a9416bed0be0f2e7894c64c88ed24ca7ec01e4658f0afd266f44ce3142"} Nov 23 04:11:26 crc kubenswrapper[4751]: I1123 04:11:26.120404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"24d654bf-e6c3-4579-b509-c5f49d9d9da9","Type":"ContainerStarted","Data":"363d1b58cf8351b5d558a330725b3590c91acc8ebd7c609784eedc445af15c43"} Nov 23 04:11:26 crc kubenswrapper[4751]: I1123 04:11:26.128082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" event={"ID":"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab","Type":"ContainerStarted","Data":"f51486f79a394c0443d109eff2aab26f71dfa7cbb5c62cd629ac0d247e20ced6"} Nov 23 04:11:26 crc kubenswrapper[4751]: I1123 04:11:26.172189 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" podStartSLOduration=4.172171932 podStartE2EDuration="4.172171932s" podCreationTimestamp="2025-11-23 04:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:26.171588727 +0000 UTC m=+982.365260086" watchObservedRunningTime="2025-11-23 04:11:26.172171932 +0000 UTC m=+982.365843291" Nov 23 04:11:26 crc kubenswrapper[4751]: I1123 04:11:26.660531 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ee08beab-370b-40ca-8ad5-10f8d86ffc53" path="/var/lib/kubelet/pods/ee08beab-370b-40ca-8ad5-10f8d86ffc53/volumes" Nov 23 04:11:27 crc kubenswrapper[4751]: I1123 04:11:27.135845 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:29 crc kubenswrapper[4751]: I1123 04:11:29.209055 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35d1922e-4ae6-48d2-bfc4-d5acee2909dc","Type":"ContainerStarted","Data":"ac2025c8c995213d6bf9670543566c2b921d7ea67d14cd018e56eda7b219e418"} Nov 23 04:11:29 crc kubenswrapper[4751]: I1123 04:11:29.209525 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerName="glance-log" containerID="cri-o://34e893a9416bed0be0f2e7894c64c88ed24ca7ec01e4658f0afd266f44ce3142" gracePeriod=30 Nov 23 04:11:29 crc kubenswrapper[4751]: I1123 04:11:29.209968 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerName="glance-httpd" containerID="cri-o://ac2025c8c995213d6bf9670543566c2b921d7ea67d14cd018e56eda7b219e418" gracePeriod=30 Nov 23 04:11:29 crc kubenswrapper[4751]: I1123 04:11:29.216503 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"24d654bf-e6c3-4579-b509-c5f49d9d9da9","Type":"ContainerStarted","Data":"e356a99eefd1c2d6381154c98915792d93df9d9d1cc5f68617e2bd25a3802a88"} Nov 23 04:11:29 crc kubenswrapper[4751]: I1123 04:11:29.216909 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerName="glance-log" containerID="cri-o://363d1b58cf8351b5d558a330725b3590c91acc8ebd7c609784eedc445af15c43" gracePeriod=30 Nov 23 04:11:29 crc kubenswrapper[4751]: I1123 04:11:29.217042 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerName="glance-httpd" containerID="cri-o://e356a99eefd1c2d6381154c98915792d93df9d9d1cc5f68617e2bd25a3802a88" gracePeriod=30 Nov 23 04:11:29 crc kubenswrapper[4751]: E1123 04:11:29.256705 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d654bf_e6c3_4579_b509_c5f49d9d9da9.slice/crio-363d1b58cf8351b5d558a330725b3590c91acc8ebd7c609784eedc445af15c43.scope\": RecentStats: unable to find data in memory cache]" Nov 23 04:11:29 crc kubenswrapper[4751]: I1123 04:11:29.260544 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.260519149 podStartE2EDuration="7.260519149s" podCreationTimestamp="2025-11-23 04:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:29.24843176 +0000 UTC m=+985.442103169" watchObservedRunningTime="2025-11-23 04:11:29.260519149 +0000 UTC m=+985.454190508" Nov 23 04:11:29 crc kubenswrapper[4751]: I1123 04:11:29.287076 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=7.287055621 podStartE2EDuration="7.287055621s" podCreationTimestamp="2025-11-23 04:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:29.27339098 +0000 UTC m=+985.467062339" watchObservedRunningTime="2025-11-23 04:11:29.287055621 +0000 UTC m=+985.480726980" Nov 23 04:11:30 crc kubenswrapper[4751]: I1123 04:11:30.245232 4751 generic.go:334] "Generic (PLEG): container finished" podID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerID="ac2025c8c995213d6bf9670543566c2b921d7ea67d14cd018e56eda7b219e418" exitCode=0 Nov 23 04:11:30 crc kubenswrapper[4751]: I1123 04:11:30.245526 4751 generic.go:334] "Generic (PLEG): container finished" podID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerID="34e893a9416bed0be0f2e7894c64c88ed24ca7ec01e4658f0afd266f44ce3142" exitCode=143 Nov 23 04:11:30 crc kubenswrapper[4751]: I1123 04:11:30.245515 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35d1922e-4ae6-48d2-bfc4-d5acee2909dc","Type":"ContainerDied","Data":"ac2025c8c995213d6bf9670543566c2b921d7ea67d14cd018e56eda7b219e418"} Nov 23 04:11:30 crc kubenswrapper[4751]: I1123 04:11:30.245588 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35d1922e-4ae6-48d2-bfc4-d5acee2909dc","Type":"ContainerDied","Data":"34e893a9416bed0be0f2e7894c64c88ed24ca7ec01e4658f0afd266f44ce3142"} Nov 23 04:11:30 crc kubenswrapper[4751]: I1123 04:11:30.256079 4751 generic.go:334] "Generic (PLEG): container finished" podID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerID="e356a99eefd1c2d6381154c98915792d93df9d9d1cc5f68617e2bd25a3802a88" exitCode=0 Nov 23 04:11:30 crc kubenswrapper[4751]: I1123 04:11:30.256120 4751 generic.go:334] "Generic (PLEG): container finished" podID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerID="363d1b58cf8351b5d558a330725b3590c91acc8ebd7c609784eedc445af15c43" exitCode=143 Nov 23 04:11:30 crc kubenswrapper[4751]: I1123 04:11:30.256144 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"24d654bf-e6c3-4579-b509-c5f49d9d9da9","Type":"ContainerDied","Data":"e356a99eefd1c2d6381154c98915792d93df9d9d1cc5f68617e2bd25a3802a88"} Nov 23 04:11:30 crc kubenswrapper[4751]: I1123 04:11:30.256175 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"24d654bf-e6c3-4579-b509-c5f49d9d9da9","Type":"ContainerDied","Data":"363d1b58cf8351b5d558a330725b3590c91acc8ebd7c609784eedc445af15c43"} Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.669963 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7595df9dd7-gj6cm"] Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.692320 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bc45b664d-wh6ld"] Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.694153 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.706467 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.708832 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bc45b664d-wh6ld"] Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.757259 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84c958854f-wnwfq"] Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.781863 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-789489d584-slcs8"] Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.784687 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.787181 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-combined-ca-bundle\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.787254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ce82842-e359-4824-abb2-6c652caf36ca-logs\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.787288 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-scripts\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.787332 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcmmx\" (UniqueName: \"kubernetes.io/projected/3ce82842-e359-4824-abb2-6c652caf36ca-kube-api-access-gcmmx\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.787388 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-secret-key\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.787415 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-config-data\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.787442 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-tls-certs\") pod 
\"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.797358 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-789489d584-slcs8"] Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888596 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-secret-key\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888646 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-combined-ca-bundle\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888665 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-config-data\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888680 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-tls-certs\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49f1490c-4b27-47c0-bc36-688b467ebe2c-logs\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888761 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-horizon-tls-certs\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49f1490c-4b27-47c0-bc36-688b467ebe2c-config-data\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888828 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49f1490c-4b27-47c0-bc36-688b467ebe2c-scripts\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888880 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-combined-ca-bundle\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ce82842-e359-4824-abb2-6c652caf36ca-logs\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888946 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-scripts\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.888992 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcmmx\" (UniqueName: \"kubernetes.io/projected/3ce82842-e359-4824-abb2-6c652caf36ca-kube-api-access-gcmmx\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.889017 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-horizon-secret-key\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.889041 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkqpx\" (UniqueName: \"kubernetes.io/projected/49f1490c-4b27-47c0-bc36-688b467ebe2c-kube-api-access-bkqpx\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.889270 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ce82842-e359-4824-abb2-6c652caf36ca-logs\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.889694 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-scripts\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.895878 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-combined-ca-bundle\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.895992 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-tls-certs\") pod \"horizon-bc45b664d-wh6ld\" (UID: 
\"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.896408 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-config-data\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.906202 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-secret-key\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.908445 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcmmx\" (UniqueName: \"kubernetes.io/projected/3ce82842-e359-4824-abb2-6c652caf36ca-kube-api-access-gcmmx\") pod \"horizon-bc45b664d-wh6ld\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") " pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.990707 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-horizon-secret-key\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.990744 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkqpx\" (UniqueName: \"kubernetes.io/projected/49f1490c-4b27-47c0-bc36-688b467ebe2c-kube-api-access-bkqpx\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.990796 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-combined-ca-bundle\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.990817 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49f1490c-4b27-47c0-bc36-688b467ebe2c-logs\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.990839 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-horizon-tls-certs\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.990875 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49f1490c-4b27-47c0-bc36-688b467ebe2c-config-data\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 
04:11:31.990895 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49f1490c-4b27-47c0-bc36-688b467ebe2c-scripts\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.991916 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49f1490c-4b27-47c0-bc36-688b467ebe2c-scripts\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.992936 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49f1490c-4b27-47c0-bc36-688b467ebe2c-logs\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.993277 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49f1490c-4b27-47c0-bc36-688b467ebe2c-config-data\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:31 crc kubenswrapper[4751]: I1123 04:11:31.997904 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-horizon-tls-certs\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:32 crc kubenswrapper[4751]: I1123 04:11:32.000181 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-combined-ca-bundle\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:32 crc kubenswrapper[4751]: I1123 04:11:32.002607 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/49f1490c-4b27-47c0-bc36-688b467ebe2c-horizon-secret-key\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:32 crc kubenswrapper[4751]: I1123 04:11:32.010396 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkqpx\" (UniqueName: \"kubernetes.io/projected/49f1490c-4b27-47c0-bc36-688b467ebe2c-kube-api-access-bkqpx\") pod \"horizon-789489d584-slcs8\" (UID: \"49f1490c-4b27-47c0-bc36-688b467ebe2c\") " pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:32 crc kubenswrapper[4751]: I1123 04:11:32.015144 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:11:32 crc kubenswrapper[4751]: I1123 04:11:32.101622 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-789489d584-slcs8" Nov 23 04:11:33 crc kubenswrapper[4751]: I1123 04:11:33.305129 4751 generic.go:334] "Generic (PLEG): container finished" podID="97b6c290-253b-4762-b9a7-1807ea27f96f" containerID="1dee67d97dbfdf8194c1fd9a53f6643b7a9747052f94be138bf4f11bf799e1e9" exitCode=0 Nov 23 04:11:33 crc kubenswrapper[4751]: I1123 04:11:33.305182 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nttr5" event={"ID":"97b6c290-253b-4762-b9a7-1807ea27f96f","Type":"ContainerDied","Data":"1dee67d97dbfdf8194c1fd9a53f6643b7a9747052f94be138bf4f11bf799e1e9"} Nov 23 04:11:33 crc kubenswrapper[4751]: I1123 04:11:33.312489 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:11:33 crc kubenswrapper[4751]: I1123 04:11:33.408565 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-j2zjh"] Nov 23 04:11:33 crc kubenswrapper[4751]: I1123 04:11:33.409065 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="dnsmasq-dns" containerID="cri-o://a9c9532667578a1611aafa6fa827a1ba2915b682379869554cf4f542563fa52d" gracePeriod=10 Nov 23 04:11:34 crc kubenswrapper[4751]: I1123 04:11:34.322511 4751 generic.go:334] "Generic (PLEG): container finished" podID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerID="a9c9532667578a1611aafa6fa827a1ba2915b682379869554cf4f542563fa52d" exitCode=0 Nov 23 04:11:34 crc kubenswrapper[4751]: I1123 04:11:34.322625 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" event={"ID":"da36c954-d8e7-425d-be39-822bfc9ed7cd","Type":"ContainerDied","Data":"a9c9532667578a1611aafa6fa827a1ba2915b682379869554cf4f542563fa52d"} Nov 23 04:11:35 crc kubenswrapper[4751]: I1123 04:11:35.400865 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Nov 23 04:11:39 crc kubenswrapper[4751]: E1123 04:11:39.919974 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 23 04:11:39 crc kubenswrapper[4751]: E1123 04:11:39.920770 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n685h89h66h6dhb4h5bh5b6h5b9hbchcbh8hch65ch67dh64dh59h6fh595h55ch547h5b7h64ch694h565hf9hb7h674h5f9h654h58ch5bch68fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8s8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7595df9dd7-gj6cm_openstack(66bd4f0d-d90e-4991-adb5-2844154b5de2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:11:39 crc kubenswrapper[4751]: E1123 04:11:39.923496 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7595df9dd7-gj6cm" podUID="66bd4f0d-d90e-4991-adb5-2844154b5de2" Nov 23 04:11:39 crc kubenswrapper[4751]: I1123 04:11:39.987107 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nttr5" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.040282 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-combined-ca-bundle\") pod \"97b6c290-253b-4762-b9a7-1807ea27f96f\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.040417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-fernet-keys\") pod \"97b6c290-253b-4762-b9a7-1807ea27f96f\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.040504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-scripts\") pod \"97b6c290-253b-4762-b9a7-1807ea27f96f\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.040564 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whh29\" (UniqueName: \"kubernetes.io/projected/97b6c290-253b-4762-b9a7-1807ea27f96f-kube-api-access-whh29\") pod \"97b6c290-253b-4762-b9a7-1807ea27f96f\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.040625 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-config-data\") pod \"97b6c290-253b-4762-b9a7-1807ea27f96f\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.040712 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-credential-keys\") pod \"97b6c290-253b-4762-b9a7-1807ea27f96f\" (UID: \"97b6c290-253b-4762-b9a7-1807ea27f96f\") " Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.047614 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "97b6c290-253b-4762-b9a7-1807ea27f96f" (UID: "97b6c290-253b-4762-b9a7-1807ea27f96f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.063941 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b6c290-253b-4762-b9a7-1807ea27f96f-kube-api-access-whh29" (OuterVolumeSpecName: "kube-api-access-whh29") pod "97b6c290-253b-4762-b9a7-1807ea27f96f" (UID: "97b6c290-253b-4762-b9a7-1807ea27f96f"). InnerVolumeSpecName "kube-api-access-whh29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.064505 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "97b6c290-253b-4762-b9a7-1807ea27f96f" (UID: "97b6c290-253b-4762-b9a7-1807ea27f96f"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.068942 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-scripts" (OuterVolumeSpecName: "scripts") pod "97b6c290-253b-4762-b9a7-1807ea27f96f" (UID: "97b6c290-253b-4762-b9a7-1807ea27f96f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.075670 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97b6c290-253b-4762-b9a7-1807ea27f96f" (UID: "97b6c290-253b-4762-b9a7-1807ea27f96f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.075748 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-config-data" (OuterVolumeSpecName: "config-data") pod "97b6c290-253b-4762-b9a7-1807ea27f96f" (UID: "97b6c290-253b-4762-b9a7-1807ea27f96f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.142888 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.142918 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.142928 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whh29\" (UniqueName: \"kubernetes.io/projected/97b6c290-253b-4762-b9a7-1807ea27f96f-kube-api-access-whh29\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.142939 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.142947 4751 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.142955 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b6c290-253b-4762-b9a7-1807ea27f96f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.381035 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nttr5" event={"ID":"97b6c290-253b-4762-b9a7-1807ea27f96f","Type":"ContainerDied","Data":"9d5b4a9692795043126ebf7ad5b085ea2913f29a24d53894da999ae48df53ffc"} Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.381063 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nttr5" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.381088 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d5b4a9692795043126ebf7ad5b085ea2913f29a24d53894da999ae48df53ffc" Nov 23 04:11:40 crc kubenswrapper[4751]: I1123 04:11:40.402848 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.166801 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nttr5"] Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.177145 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nttr5"] Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.269707 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vzfxr"] Nov 23 04:11:41 crc kubenswrapper[4751]: E1123 04:11:41.270111 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b6c290-253b-4762-b9a7-1807ea27f96f" containerName="keystone-bootstrap" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.270133 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b6c290-253b-4762-b9a7-1807ea27f96f" containerName="keystone-bootstrap" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.270369 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b6c290-253b-4762-b9a7-1807ea27f96f" containerName="keystone-bootstrap" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.270963 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.277181 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vzfxr"] Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.279339 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.279653 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.279907 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.280092 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vxbn5" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.280282 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.371161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-scripts\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.371203 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lc7s\" (UniqueName: \"kubernetes.io/projected/402436df-c2b0-435a-8fed-4d88a3af1e40-kube-api-access-4lc7s\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.371233 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-credential-keys\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.371312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-fernet-keys\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.371382 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-config-data\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.371422 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-combined-ca-bundle\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.472855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-fernet-keys\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.472919 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-config-data\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.472949 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-combined-ca-bundle\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.473140 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-scripts\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.473176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lc7s\" (UniqueName: \"kubernetes.io/projected/402436df-c2b0-435a-8fed-4d88a3af1e40-kube-api-access-4lc7s\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.473224 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-credential-keys\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.480772 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-credential-keys\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.484848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-config-data\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.494283 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-fernet-keys\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.494821 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-scripts\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " 
pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.498919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-combined-ca-bundle\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.500238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lc7s\" (UniqueName: \"kubernetes.io/projected/402436df-c2b0-435a-8fed-4d88a3af1e40-kube-api-access-4lc7s\") pod \"keystone-bootstrap-vzfxr\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:41 crc kubenswrapper[4751]: I1123 04:11:41.594864 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:11:42 crc kubenswrapper[4751]: I1123 04:11:42.662238 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b6c290-253b-4762-b9a7-1807ea27f96f" path="/var/lib/kubelet/pods/97b6c290-253b-4762-b9a7-1807ea27f96f/volumes" Nov 23 04:11:42 crc kubenswrapper[4751]: E1123 04:11:42.723098 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 23 04:11:42 crc kubenswrapper[4751]: E1123 04:11:42.723459 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n666h9chb5h5bfh5d8h7dh66fh87hbdh594h66h689h5dh7chdbh5c8h54dh5cfh5b4hbdh86h59fh57ch5f9h5bch78h669h74h54fhf4h87h584q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqq7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-6594b8fbdc-sj2cr_openstack(3f54d651-41da-46a8-a8c3-27fb1ad249b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:11:42 crc kubenswrapper[4751]: E1123 04:11:42.726210 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6594b8fbdc-sj2cr" podUID="3f54d651-41da-46a8-a8c3-27fb1ad249b8" Nov 23 04:11:43 crc kubenswrapper[4751]: E1123 04:11:43.290593 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 23 04:11:43 crc kubenswrapper[4751]: E1123 04:11:43.290967 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wxbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-q65sd_openstack(4ffd47c6-ed23-4bc3-be63-dd817807dc3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:11:43 crc kubenswrapper[4751]: E1123 04:11:43.292148 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-q65sd" podUID="4ffd47c6-ed23-4bc3-be63-dd817807dc3e" Nov 23 04:11:43 crc kubenswrapper[4751]: E1123 04:11:43.409293 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-q65sd" podUID="4ffd47c6-ed23-4bc3-be63-dd817807dc3e" Nov 23 04:11:50 crc kubenswrapper[4751]: I1123 04:11:50.401119 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Nov 23 04:11:50 crc kubenswrapper[4751]: I1123 04:11:50.401969 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.669274 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7595df9dd7-gj6cm" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.778982 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-config-data\") pod \"66bd4f0d-d90e-4991-adb5-2844154b5de2\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.779062 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8s8p\" (UniqueName: \"kubernetes.io/projected/66bd4f0d-d90e-4991-adb5-2844154b5de2-kube-api-access-j8s8p\") pod \"66bd4f0d-d90e-4991-adb5-2844154b5de2\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.779131 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66bd4f0d-d90e-4991-adb5-2844154b5de2-logs\") pod \"66bd4f0d-d90e-4991-adb5-2844154b5de2\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.779279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66bd4f0d-d90e-4991-adb5-2844154b5de2-horizon-secret-key\") pod \"66bd4f0d-d90e-4991-adb5-2844154b5de2\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.779414 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-scripts\") pod \"66bd4f0d-d90e-4991-adb5-2844154b5de2\" (UID: \"66bd4f0d-d90e-4991-adb5-2844154b5de2\") " Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.779662 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66bd4f0d-d90e-4991-adb5-2844154b5de2-logs" (OuterVolumeSpecName: "logs") pod "66bd4f0d-d90e-4991-adb5-2844154b5de2" (UID: "66bd4f0d-d90e-4991-adb5-2844154b5de2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.779877 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-scripts" (OuterVolumeSpecName: "scripts") pod "66bd4f0d-d90e-4991-adb5-2844154b5de2" (UID: "66bd4f0d-d90e-4991-adb5-2844154b5de2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.779965 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66bd4f0d-d90e-4991-adb5-2844154b5de2-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.780232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-config-data" (OuterVolumeSpecName: "config-data") pod "66bd4f0d-d90e-4991-adb5-2844154b5de2" (UID: "66bd4f0d-d90e-4991-adb5-2844154b5de2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.786703 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66bd4f0d-d90e-4991-adb5-2844154b5de2-kube-api-access-j8s8p" (OuterVolumeSpecName: "kube-api-access-j8s8p") pod "66bd4f0d-d90e-4991-adb5-2844154b5de2" (UID: "66bd4f0d-d90e-4991-adb5-2844154b5de2"). InnerVolumeSpecName "kube-api-access-j8s8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.787277 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bd4f0d-d90e-4991-adb5-2844154b5de2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "66bd4f0d-d90e-4991-adb5-2844154b5de2" (UID: "66bd4f0d-d90e-4991-adb5-2844154b5de2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.881563 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8s8p\" (UniqueName: \"kubernetes.io/projected/66bd4f0d-d90e-4991-adb5-2844154b5de2-kube-api-access-j8s8p\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.881603 4751 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/66bd4f0d-d90e-4991-adb5-2844154b5de2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.881615 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:51 crc kubenswrapper[4751]: I1123 04:11:51.881628 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66bd4f0d-d90e-4991-adb5-2844154b5de2-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:51 crc kubenswrapper[4751]: E1123 04:11:51.980990 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 23 04:11:51 crc kubenswrapper[4751]: E1123 04:11:51.981237 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n686h668hd5h59bhbbh65dh64dh69h5c9h687h675hdbh5d7hcch585h86h9fh54dh54ch58bh697h54h67ch88h96h54fh5c8hf8h79h5d5h55fh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbtcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7b369c41-886d-44cc-821b-2d415431f9ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.033841 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.040285 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.044442 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6594b8fbdc-sj2cr" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.075200 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087642 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-config\") pod \"da36c954-d8e7-425d-be39-822bfc9ed7cd\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087685 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-svc\") pod \"da36c954-d8e7-425d-be39-822bfc9ed7cd\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087723 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-sb\") pod \"da36c954-d8e7-425d-be39-822bfc9ed7cd\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087745 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54d651-41da-46a8-a8c3-27fb1ad249b8-logs\") pod \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087767 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-config-data\") pod \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087782 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-httpd-run\") pod \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087809 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-scripts\") pod \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087846 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-config-data\") pod \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087902 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-public-tls-certs\") pod \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087940 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-swift-storage-0\") pod \"da36c954-d8e7-425d-be39-822bfc9ed7cd\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " Nov 23 04:11:52 crc 
kubenswrapper[4751]: I1123 04:11:52.087960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqq7n\" (UniqueName: \"kubernetes.io/projected/3f54d651-41da-46a8-a8c3-27fb1ad249b8-kube-api-access-nqq7n\") pod \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.087977 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46gwd\" (UniqueName: \"kubernetes.io/projected/24d654bf-e6c3-4579-b509-c5f49d9d9da9-kube-api-access-46gwd\") pod \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.088000 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-nb\") pod \"da36c954-d8e7-425d-be39-822bfc9ed7cd\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.088029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf8rc\" (UniqueName: \"kubernetes.io/projected/da36c954-d8e7-425d-be39-822bfc9ed7cd-kube-api-access-qf8rc\") pod \"da36c954-d8e7-425d-be39-822bfc9ed7cd\" (UID: \"da36c954-d8e7-425d-be39-822bfc9ed7cd\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.088044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f54d651-41da-46a8-a8c3-27fb1ad249b8-horizon-secret-key\") pod \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\" (UID: \"3f54d651-41da-46a8-a8c3-27fb1ad249b8\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.088065 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.088087 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-scripts\") pod \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.088122 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-combined-ca-bundle\") pod \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.088141 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-logs\") pod \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\" (UID: \"24d654bf-e6c3-4579-b509-c5f49d9d9da9\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.093187 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-logs" (OuterVolumeSpecName: "logs") pod "24d654bf-e6c3-4579-b509-c5f49d9d9da9" (UID: "24d654bf-e6c3-4579-b509-c5f49d9d9da9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.094489 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f54d651-41da-46a8-a8c3-27fb1ad249b8-logs" (OuterVolumeSpecName: "logs") pod "3f54d651-41da-46a8-a8c3-27fb1ad249b8" (UID: "3f54d651-41da-46a8-a8c3-27fb1ad249b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.094642 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-scripts" (OuterVolumeSpecName: "scripts") pod "3f54d651-41da-46a8-a8c3-27fb1ad249b8" (UID: "3f54d651-41da-46a8-a8c3-27fb1ad249b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.094669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-config-data" (OuterVolumeSpecName: "config-data") pod "3f54d651-41da-46a8-a8c3-27fb1ad249b8" (UID: "3f54d651-41da-46a8-a8c3-27fb1ad249b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.095171 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "24d654bf-e6c3-4579-b509-c5f49d9d9da9" (UID: "24d654bf-e6c3-4579-b509-c5f49d9d9da9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.099442 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d654bf-e6c3-4579-b509-c5f49d9d9da9-kube-api-access-46gwd" (OuterVolumeSpecName: "kube-api-access-46gwd") pod "24d654bf-e6c3-4579-b509-c5f49d9d9da9" (UID: "24d654bf-e6c3-4579-b509-c5f49d9d9da9"). InnerVolumeSpecName "kube-api-access-46gwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.105404 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f54d651-41da-46a8-a8c3-27fb1ad249b8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3f54d651-41da-46a8-a8c3-27fb1ad249b8" (UID: "3f54d651-41da-46a8-a8c3-27fb1ad249b8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.105409 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da36c954-d8e7-425d-be39-822bfc9ed7cd-kube-api-access-qf8rc" (OuterVolumeSpecName: "kube-api-access-qf8rc") pod "da36c954-d8e7-425d-be39-822bfc9ed7cd" (UID: "da36c954-d8e7-425d-be39-822bfc9ed7cd"). InnerVolumeSpecName "kube-api-access-qf8rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.110572 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-scripts" (OuterVolumeSpecName: "scripts") pod "24d654bf-e6c3-4579-b509-c5f49d9d9da9" (UID: "24d654bf-e6c3-4579-b509-c5f49d9d9da9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.117185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "24d654bf-e6c3-4579-b509-c5f49d9d9da9" (UID: "24d654bf-e6c3-4579-b509-c5f49d9d9da9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.128396 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f54d651-41da-46a8-a8c3-27fb1ad249b8-kube-api-access-nqq7n" (OuterVolumeSpecName: "kube-api-access-nqq7n") pod "3f54d651-41da-46a8-a8c3-27fb1ad249b8" (UID: "3f54d651-41da-46a8-a8c3-27fb1ad249b8"). InnerVolumeSpecName "kube-api-access-nqq7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.141747 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24d654bf-e6c3-4579-b509-c5f49d9d9da9" (UID: "24d654bf-e6c3-4579-b509-c5f49d9d9da9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.143812 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-config-data" (OuterVolumeSpecName: "config-data") pod "24d654bf-e6c3-4579-b509-c5f49d9d9da9" (UID: "24d654bf-e6c3-4579-b509-c5f49d9d9da9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.159621 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da36c954-d8e7-425d-be39-822bfc9ed7cd" (UID: "da36c954-d8e7-425d-be39-822bfc9ed7cd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.168660 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da36c954-d8e7-425d-be39-822bfc9ed7cd" (UID: "da36c954-d8e7-425d-be39-822bfc9ed7cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.172758 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "24d654bf-e6c3-4579-b509-c5f49d9d9da9" (UID: "24d654bf-e6c3-4579-b509-c5f49d9d9da9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.175958 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da36c954-d8e7-425d-be39-822bfc9ed7cd" (UID: "da36c954-d8e7-425d-be39-822bfc9ed7cd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.182237 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da36c954-d8e7-425d-be39-822bfc9ed7cd" (UID: "da36c954-d8e7-425d-be39-822bfc9ed7cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.185951 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-config" (OuterVolumeSpecName: "config") pod "da36c954-d8e7-425d-be39-822bfc9ed7cd" (UID: "da36c954-d8e7-425d-be39-822bfc9ed7cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.189952 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-combined-ca-bundle\") pod \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190078 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-scripts\") pod \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190120 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190141 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-config-data\") pod \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190192 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-httpd-run\") pod \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190258 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlzkq\" (UniqueName: \"kubernetes.io/projected/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-kube-api-access-qlzkq\") pod \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190320 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-internal-tls-certs\") pod \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190382 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-logs\") pod \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\" (UID: \"35d1922e-4ae6-48d2-bfc4-d5acee2909dc\") " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190658 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35d1922e-4ae6-48d2-bfc4-d5acee2909dc" (UID: "35d1922e-4ae6-48d2-bfc4-d5acee2909dc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.190873 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-logs" (OuterVolumeSpecName: "logs") pod "35d1922e-4ae6-48d2-bfc4-d5acee2909dc" (UID: "35d1922e-4ae6-48d2-bfc4-d5acee2909dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191117 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191141 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191156 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191171 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqq7n\" (UniqueName: \"kubernetes.io/projected/3f54d651-41da-46a8-a8c3-27fb1ad249b8-kube-api-access-nqq7n\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191183 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46gwd\" (UniqueName: \"kubernetes.io/projected/24d654bf-e6c3-4579-b509-c5f49d9d9da9-kube-api-access-46gwd\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191194 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191207 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf8rc\" (UniqueName: \"kubernetes.io/projected/da36c954-d8e7-425d-be39-822bfc9ed7cd-kube-api-access-qf8rc\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191219 4751 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3f54d651-41da-46a8-a8c3-27fb1ad249b8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191251 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191263 4751 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191275 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191286 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191298 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191309 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191320 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191340 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da36c954-d8e7-425d-be39-822bfc9ed7cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191368 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f54d651-41da-46a8-a8c3-27fb1ad249b8-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191379 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d654bf-e6c3-4579-b509-c5f49d9d9da9-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191391 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24d654bf-e6c3-4579-b509-c5f49d9d9da9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191402 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.191413 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f54d651-41da-46a8-a8c3-27fb1ad249b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.194115 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "35d1922e-4ae6-48d2-bfc4-d5acee2909dc" (UID: "35d1922e-4ae6-48d2-bfc4-d5acee2909dc"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.194501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-kube-api-access-qlzkq" (OuterVolumeSpecName: "kube-api-access-qlzkq") pod "35d1922e-4ae6-48d2-bfc4-d5acee2909dc" (UID: "35d1922e-4ae6-48d2-bfc4-d5acee2909dc"). InnerVolumeSpecName "kube-api-access-qlzkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.196141 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-scripts" (OuterVolumeSpecName: "scripts") pod "35d1922e-4ae6-48d2-bfc4-d5acee2909dc" (UID: "35d1922e-4ae6-48d2-bfc4-d5acee2909dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.212666 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35d1922e-4ae6-48d2-bfc4-d5acee2909dc" (UID: "35d1922e-4ae6-48d2-bfc4-d5acee2909dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.215770 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.232501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-config-data" (OuterVolumeSpecName: "config-data") pod "35d1922e-4ae6-48d2-bfc4-d5acee2909dc" (UID: "35d1922e-4ae6-48d2-bfc4-d5acee2909dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.232813 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "35d1922e-4ae6-48d2-bfc4-d5acee2909dc" (UID: "35d1922e-4ae6-48d2-bfc4-d5acee2909dc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.292899 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlzkq\" (UniqueName: \"kubernetes.io/projected/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-kube-api-access-qlzkq\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.293123 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.293183 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.293238 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.293290 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.293382 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.293437 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35d1922e-4ae6-48d2-bfc4-d5acee2909dc-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.315813 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.395717 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.507901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7595df9dd7-gj6cm" event={"ID":"66bd4f0d-d90e-4991-adb5-2844154b5de2","Type":"ContainerDied","Data":"1e058a2bb08661c028236032fdedaca6ac3031fff221ddd2159387588e4a7afc"} Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.508025 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7595df9dd7-gj6cm" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.512760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6594b8fbdc-sj2cr" event={"ID":"3f54d651-41da-46a8-a8c3-27fb1ad249b8","Type":"ContainerDied","Data":"12e82121806b67272ff595b9d8b0cb59cfd279bdc95e594bcbd1824cd89b8cfb"} Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.512850 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6594b8fbdc-sj2cr" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.516863 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"24d654bf-e6c3-4579-b509-c5f49d9d9da9","Type":"ContainerDied","Data":"5d6cdc6366ccd7b6e6c8fa7b0c90193596dea0b41aa77c940882f16e610e04d6"} Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.516896 4751 scope.go:117] "RemoveContainer" containerID="e356a99eefd1c2d6381154c98915792d93df9d9d1cc5f68617e2bd25a3802a88" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.516988 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.519872 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" event={"ID":"da36c954-d8e7-425d-be39-822bfc9ed7cd","Type":"ContainerDied","Data":"66d40f17e4ab81a8cd100d034857a49e95ac636c0a6168e8f8f7c9cceb45ed8a"} Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.519929 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.528128 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35d1922e-4ae6-48d2-bfc4-d5acee2909dc","Type":"ContainerDied","Data":"bfed6211eb7e059045fbf61a494a671c7a6b46f4e5721d882e9bd9679937b89b"} Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.528219 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.563995 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.597529 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.616743 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:11:52 crc kubenswrapper[4751]: E1123 04:11:52.617216 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="dnsmasq-dns" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.617231 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="dnsmasq-dns" Nov 23 04:11:52 crc kubenswrapper[4751]: E1123 04:11:52.617260 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerName="glance-httpd" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.617267 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerName="glance-httpd" Nov 23 04:11:52 crc kubenswrapper[4751]: E1123 04:11:52.617283 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="init" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.617291 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="init" Nov 23 04:11:52 crc kubenswrapper[4751]: E1123 04:11:52.617305 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerName="glance-log" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.617312 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerName="glance-log" Nov 23 04:11:52 crc kubenswrapper[4751]: E1123 04:11:52.617329 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerName="glance-httpd" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.617336 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerName="glance-httpd" Nov 23 04:11:52 crc kubenswrapper[4751]: E1123 04:11:52.617415 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerName="glance-log" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.617425 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerName="glance-log" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.618120 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerName="glance-log" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.618141 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerName="glance-log" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.618158 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="dnsmasq-dns" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.618169 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" containerName="glance-httpd" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.618185 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" containerName="glance-httpd" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.619092 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.621494 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.622765 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7595df9dd7-gj6cm"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.622931 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hp66g" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.623090 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.623216 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.632370 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7595df9dd7-gj6cm"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.668496 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d654bf-e6c3-4579-b509-c5f49d9d9da9" path="/var/lib/kubelet/pods/24d654bf-e6c3-4579-b509-c5f49d9d9da9/volumes" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.681980 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66bd4f0d-d90e-4991-adb5-2844154b5de2" path="/var/lib/kubelet/pods/66bd4f0d-d90e-4991-adb5-2844154b5de2/volumes" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.684122 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.685052 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-j2zjh"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.689249 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-j2zjh"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.697540 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.702132 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.702186 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5nf\" (UniqueName: \"kubernetes.io/projected/ff26108b-8bb2-4135-acbd-49bdd6fb9940-kube-api-access-wb5nf\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.702241 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.702365 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.703240 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.703272 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.703326 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.703432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-logs\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.710533 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.738413 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.739818 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.741427 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.741622 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.747586 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.768570 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6594b8fbdc-sj2cr"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.775289 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6594b8fbdc-sj2cr"] Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.807423 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.807770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.807816 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlb5\" (UniqueName: \"kubernetes.io/projected/49606943-d83e-4d27-9e18-88ae5a000b6c-kube-api-access-4nlb5\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.807847 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.807880 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.807926 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.807970 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-logs\") pod 
\"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.807993 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-logs\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.808018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.808081 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.808105 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5nf\" (UniqueName: \"kubernetes.io/projected/ff26108b-8bb2-4135-acbd-49bdd6fb9940-kube-api-access-wb5nf\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.808133 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.808182 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.808211 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.808236 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.808289 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.811457 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.811668 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.813167 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-logs\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.813829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.813847 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.819398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.819867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.836968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5nf\" (UniqueName: \"kubernetes.io/projected/ff26108b-8bb2-4135-acbd-49bdd6fb9940-kube-api-access-wb5nf\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.860248 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") " pod="openstack/glance-default-external-api-0" Nov 23 
04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.913365 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nlb5\" (UniqueName: \"kubernetes.io/projected/49606943-d83e-4d27-9e18-88ae5a000b6c-kube-api-access-4nlb5\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.913428 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.913467 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.913506 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-logs\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.913529 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.913606 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.913632 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.913676 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.914642 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-logs\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.914775 4751 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.915056 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.920641 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.926033 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.938024 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.949121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.955989 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nlb5\" (UniqueName: \"kubernetes.io/projected/49606943-d83e-4d27-9e18-88ae5a000b6c-kube-api-access-4nlb5\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:52 crc kubenswrapper[4751]: I1123 04:11:52.983612 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:11:53 crc kubenswrapper[4751]: I1123 04:11:53.016791 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 04:11:53 crc kubenswrapper[4751]: I1123 04:11:53.052973 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 04:11:53 crc kubenswrapper[4751]: E1123 04:11:53.579430 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 23 04:11:53 crc kubenswrapper[4751]: E1123 04:11:53.582816 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fs7k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-slzk9_openstack(37ead28c-46bc-4415-a35c-1d3d8de722dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:11:53 crc kubenswrapper[4751]: I1123 04:11:53.583237 4751 scope.go:117] "RemoveContainer" containerID="363d1b58cf8351b5d558a330725b3590c91acc8ebd7c609784eedc445af15c43" Nov 23 04:11:53 crc kubenswrapper[4751]: E1123 04:11:53.584484 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-slzk9" 
podUID="37ead28c-46bc-4415-a35c-1d3d8de722dd" Nov 23 04:11:53 crc kubenswrapper[4751]: I1123 04:11:53.790386 4751 scope.go:117] "RemoveContainer" containerID="a9c9532667578a1611aafa6fa827a1ba2915b682379869554cf4f542563fa52d" Nov 23 04:11:53 crc kubenswrapper[4751]: I1123 04:11:53.817631 4751 scope.go:117] "RemoveContainer" containerID="f6a62e6961c9fd0271349bb9b75edd15434f701db8b3512478550a53b7215faf" Nov 23 04:11:53 crc kubenswrapper[4751]: I1123 04:11:53.878632 4751 scope.go:117] "RemoveContainer" containerID="ac2025c8c995213d6bf9670543566c2b921d7ea67d14cd018e56eda7b219e418" Nov 23 04:11:53 crc kubenswrapper[4751]: I1123 04:11:53.930554 4751 scope.go:117] "RemoveContainer" containerID="34e893a9416bed0be0f2e7894c64c88ed24ca7ec01e4658f0afd266f44ce3142" Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.023505 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bc45b664d-wh6ld"] Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.057802 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-789489d584-slcs8"] Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.191164 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vzfxr"] Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.256194 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:11:54 crc kubenswrapper[4751]: W1123 04:11:54.331965 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod402436df_c2b0_435a_8fed_4d88a3af1e40.slice/crio-90dffdff831f81ec106a662f05452347ab773a6c99014df4e1bcb7fcd0c2b01b WatchSource:0}: Error finding container 90dffdff831f81ec106a662f05452347ab773a6c99014df4e1bcb7fcd0c2b01b: Status 404 returned error can't find the container with id 90dffdff831f81ec106a662f05452347ab773a6c99014df4e1bcb7fcd0c2b01b Nov 23 04:11:54 crc kubenswrapper[4751]: W1123 04:11:54.341486 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ce82842_e359_4824_abb2_6c652caf36ca.slice/crio-e4a836d9b29795ac09ee427deff51ed5333fdb46195035d2d9aa87de9308326a WatchSource:0}: Error finding container e4a836d9b29795ac09ee427deff51ed5333fdb46195035d2d9aa87de9308326a: Status 404 returned error can't find the container with id e4a836d9b29795ac09ee427deff51ed5333fdb46195035d2d9aa87de9308326a Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.368741 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.548565 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff26108b-8bb2-4135-acbd-49bdd6fb9940","Type":"ContainerStarted","Data":"2bb69210774a9b4297b16a3e21e54a8b7cfde957ccaf528736003e241e277fd9"} Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.552841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84c958854f-wnwfq" event={"ID":"9f395bca-24d1-4b23-8eb2-782713ee4d9a","Type":"ContainerStarted","Data":"9aa5442f4c07e34afc6678f5cf2c0ce807dee5f569c32d284a5bf8b80d8b2ed3"} Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.552866 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84c958854f-wnwfq" 
event={"ID":"9f395bca-24d1-4b23-8eb2-782713ee4d9a","Type":"ContainerStarted","Data":"9ab5f50984b9f80f71577a873a35612d085dd51dddd9f5cd6b45df679229dbdf"} Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.552968 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84c958854f-wnwfq" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerName="horizon-log" containerID="cri-o://9ab5f50984b9f80f71577a873a35612d085dd51dddd9f5cd6b45df679229dbdf" gracePeriod=30 Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.553434 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84c958854f-wnwfq" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerName="horizon" containerID="cri-o://9aa5442f4c07e34afc6678f5cf2c0ce807dee5f569c32d284a5bf8b80d8b2ed3" gracePeriod=30 Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.556923 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vzfxr" event={"ID":"402436df-c2b0-435a-8fed-4d88a3af1e40","Type":"ContainerStarted","Data":"90dffdff831f81ec106a662f05452347ab773a6c99014df4e1bcb7fcd0c2b01b"} Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.565094 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc45b664d-wh6ld" event={"ID":"3ce82842-e359-4824-abb2-6c652caf36ca","Type":"ContainerStarted","Data":"e4a836d9b29795ac09ee427deff51ed5333fdb46195035d2d9aa87de9308326a"} Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.566414 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789489d584-slcs8" event={"ID":"49f1490c-4b27-47c0-bc36-688b467ebe2c","Type":"ContainerStarted","Data":"541d2f545b30d04be31445d954462082539a9d6b2c7d840b7d4059f65e7ab129"} Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.579205 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-84c958854f-wnwfq" podStartSLOduration=2.6767662960000003 podStartE2EDuration="30.579188666s" podCreationTimestamp="2025-11-23 04:11:24 +0000 UTC" firstStartedPulling="2025-11-23 04:11:25.631223258 +0000 UTC m=+981.824894617" lastFinishedPulling="2025-11-23 04:11:53.533645618 +0000 UTC m=+1009.727316987" observedRunningTime="2025-11-23 04:11:54.573057294 +0000 UTC m=+1010.766728653" watchObservedRunningTime="2025-11-23 04:11:54.579188666 +0000 UTC m=+1010.772860025" Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.580680 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49606943-d83e-4d27-9e18-88ae5a000b6c","Type":"ContainerStarted","Data":"89e2f7ba4f87f5aaad3af50dc0c85cbc89ec8f5d3cd8c0ea76ab105f426919aa"} Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.585629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hhmh2" event={"ID":"ee0ce204-8f54-4c8c-98a6-f18f36873339","Type":"ContainerStarted","Data":"2259914a36e96ca06baa494295879beb76786d10ab91f69b144288b95aac8399"} Nov 23 04:11:54 crc kubenswrapper[4751]: E1123 04:11:54.587121 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-slzk9" podUID="37ead28c-46bc-4415-a35c-1d3d8de722dd" Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.620988 4751 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/placement-db-sync-hhmh2" podStartSLOduration=3.038175037 podStartE2EDuration="32.62096495s" podCreationTimestamp="2025-11-23 04:11:22 +0000 UTC" firstStartedPulling="2025-11-23 04:11:23.928602667 +0000 UTC m=+980.122274026" lastFinishedPulling="2025-11-23 04:11:53.51139256 +0000 UTC m=+1009.705063939" observedRunningTime="2025-11-23 04:11:54.615216668 +0000 UTC m=+1010.808888027" watchObservedRunningTime="2025-11-23 04:11:54.62096495 +0000 UTC m=+1010.814636309" Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.671075 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d1922e-4ae6-48d2-bfc4-d5acee2909dc" path="/var/lib/kubelet/pods/35d1922e-4ae6-48d2-bfc4-d5acee2909dc/volumes" Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.671988 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f54d651-41da-46a8-a8c3-27fb1ad249b8" path="/var/lib/kubelet/pods/3f54d651-41da-46a8-a8c3-27fb1ad249b8/volumes" Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.672380 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" path="/var/lib/kubelet/pods/da36c954-d8e7-425d-be39-822bfc9ed7cd/volumes" Nov 23 04:11:54 crc kubenswrapper[4751]: I1123 04:11:54.914889 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.402133 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-j2zjh" podUID="da36c954-d8e7-425d-be39-822bfc9ed7cd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.599270 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc45b664d-wh6ld" event={"ID":"3ce82842-e359-4824-abb2-6c652caf36ca","Type":"ContainerStarted","Data":"9479c96ecd501a93127a0cba95158d6fc07b3db5c33697a330901fef10a6c446"} Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.599339 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc45b664d-wh6ld" event={"ID":"3ce82842-e359-4824-abb2-6c652caf36ca","Type":"ContainerStarted","Data":"7e524121e462318f5ee78e48f212c7932519de6fda8d0e8b7d02daab42da34b7"} Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.602715 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff26108b-8bb2-4135-acbd-49bdd6fb9940","Type":"ContainerStarted","Data":"6ec27529143d8d4189383d1669d93d38347901c6fcb8b737b7b64d8a08279512"} Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.605340 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789489d584-slcs8" event={"ID":"49f1490c-4b27-47c0-bc36-688b467ebe2c","Type":"ContainerStarted","Data":"fcb014f97618653897c43bc3579ef964b60f8dc4e28bd9457642f1b9b1b0aee5"} Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.605402 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789489d584-slcs8" event={"ID":"49f1490c-4b27-47c0-bc36-688b467ebe2c","Type":"ContainerStarted","Data":"95f1b8d7bd135e34d3ea0ad914b9bf0053b696273657de7e645d06990f16af26"} Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.608530 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"49606943-d83e-4d27-9e18-88ae5a000b6c","Type":"ContainerStarted","Data":"68a049f1251d1fb6d133440d77c5d2354bb40477c13fb49e64759fea660944c0"} Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.614202 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b369c41-886d-44cc-821b-2d415431f9ec","Type":"ContainerStarted","Data":"725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649"} Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.616974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vzfxr" event={"ID":"402436df-c2b0-435a-8fed-4d88a3af1e40","Type":"ContainerStarted","Data":"52fb127f56ef4f0527086e4443a35cad61449f1ed141487b4b474cee1baaeb50"} Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.623426 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bc45b664d-wh6ld" podStartSLOduration=24.623410978 podStartE2EDuration="24.623410978s" podCreationTimestamp="2025-11-23 04:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:55.619800383 +0000 UTC m=+1011.813471762" watchObservedRunningTime="2025-11-23 04:11:55.623410978 +0000 UTC m=+1011.817082337" Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.649107 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vzfxr" podStartSLOduration=14.649083827 podStartE2EDuration="14.649083827s" podCreationTimestamp="2025-11-23 04:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:55.646206071 +0000 UTC m=+1011.839877450" watchObservedRunningTime="2025-11-23 04:11:55.649083827 +0000 UTC m=+1011.842755196" Nov 23 04:11:55 crc kubenswrapper[4751]: I1123 04:11:55.683947 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-789489d584-slcs8" podStartSLOduration=24.683919877 podStartE2EDuration="24.683919877s" podCreationTimestamp="2025-11-23 04:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:55.670639456 +0000 UTC m=+1011.864310835" watchObservedRunningTime="2025-11-23 04:11:55.683919877 +0000 UTC m=+1011.877591236" Nov 23 04:11:56 crc kubenswrapper[4751]: I1123 04:11:56.637617 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49606943-d83e-4d27-9e18-88ae5a000b6c","Type":"ContainerStarted","Data":"9102cb5ee21962a7721f1fda43287ab7696cf30ee6bcf956226b218e9199c8f4"} Nov 23 04:11:56 crc kubenswrapper[4751]: I1123 04:11:56.654001 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff26108b-8bb2-4135-acbd-49bdd6fb9940","Type":"ContainerStarted","Data":"3fd994d1a4d18b2d5cca8f908653e334be8ba7c78ce97f28bc3e17d2c2c40c7a"} Nov 23 04:11:56 crc kubenswrapper[4751]: I1123 04:11:56.689550 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.68953271 podStartE2EDuration="4.68953271s" podCreationTimestamp="2025-11-23 04:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:56.669652065 
+0000 UTC m=+1012.863323424" watchObservedRunningTime="2025-11-23 04:11:56.68953271 +0000 UTC m=+1012.883204069" Nov 23 04:11:56 crc kubenswrapper[4751]: I1123 04:11:56.711362 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.711329846 podStartE2EDuration="4.711329846s" podCreationTimestamp="2025-11-23 04:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:11:56.702292647 +0000 UTC m=+1012.895964006" watchObservedRunningTime="2025-11-23 04:11:56.711329846 +0000 UTC m=+1012.905001205" Nov 23 04:12:00 crc kubenswrapper[4751]: I1123 04:12:00.722922 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q65sd" event={"ID":"4ffd47c6-ed23-4bc3-be63-dd817807dc3e","Type":"ContainerStarted","Data":"965f7e2ca2d2e35b337bbf8191496d92baeed52f1c850233659f271ae60e5e82"} Nov 23 04:12:00 crc kubenswrapper[4751]: I1123 04:12:00.725565 4751 generic.go:334] "Generic (PLEG): container finished" podID="402436df-c2b0-435a-8fed-4d88a3af1e40" containerID="52fb127f56ef4f0527086e4443a35cad61449f1ed141487b4b474cee1baaeb50" exitCode=0 Nov 23 04:12:00 crc kubenswrapper[4751]: I1123 04:12:00.725626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vzfxr" event={"ID":"402436df-c2b0-435a-8fed-4d88a3af1e40","Type":"ContainerDied","Data":"52fb127f56ef4f0527086e4443a35cad61449f1ed141487b4b474cee1baaeb50"} Nov 23 04:12:00 crc kubenswrapper[4751]: I1123 04:12:00.740951 4751 generic.go:334] "Generic (PLEG): container finished" podID="d4181f6c-4f0a-41fb-af82-f7f10f85c117" containerID="ea8e5ba87f241184fa7a06b0e02fc44c9e26afc85a9c23c55a4b55105a6208fa" exitCode=0 Nov 23 04:12:00 crc kubenswrapper[4751]: I1123 04:12:00.741030 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jkfd7" event={"ID":"d4181f6c-4f0a-41fb-af82-f7f10f85c117","Type":"ContainerDied","Data":"ea8e5ba87f241184fa7a06b0e02fc44c9e26afc85a9c23c55a4b55105a6208fa"} Nov 23 04:12:00 crc kubenswrapper[4751]: I1123 04:12:00.745322 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-q65sd" podStartSLOduration=3.078213515 podStartE2EDuration="38.745308541s" podCreationTimestamp="2025-11-23 04:11:22 +0000 UTC" firstStartedPulling="2025-11-23 04:11:23.927131118 +0000 UTC m=+980.120802477" lastFinishedPulling="2025-11-23 04:11:59.594226104 +0000 UTC m=+1015.787897503" observedRunningTime="2025-11-23 04:12:00.737694169 +0000 UTC m=+1016.931365528" watchObservedRunningTime="2025-11-23 04:12:00.745308541 +0000 UTC m=+1016.938979900" Nov 23 04:12:00 crc kubenswrapper[4751]: I1123 04:12:00.753533 4751 generic.go:334] "Generic (PLEG): container finished" podID="ee0ce204-8f54-4c8c-98a6-f18f36873339" containerID="2259914a36e96ca06baa494295879beb76786d10ab91f69b144288b95aac8399" exitCode=0 Nov 23 04:12:00 crc kubenswrapper[4751]: I1123 04:12:00.756706 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hhmh2" event={"ID":"ee0ce204-8f54-4c8c-98a6-f18f36873339","Type":"ContainerDied","Data":"2259914a36e96ca06baa494295879beb76786d10ab91f69b144288b95aac8399"} Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.017434 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.017741 4751 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.104440 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-789489d584-slcs8" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.105476 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-789489d584-slcs8" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.715393 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.777369 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hhmh2" event={"ID":"ee0ce204-8f54-4c8c-98a6-f18f36873339","Type":"ContainerDied","Data":"59c58ab702740765b3f4f688188125812509bb3438cd8cf60407128bb82d4c80"} Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.777408 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59c58ab702740765b3f4f688188125812509bb3438cd8cf60407128bb82d4c80" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.778925 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vzfxr" event={"ID":"402436df-c2b0-435a-8fed-4d88a3af1e40","Type":"ContainerDied","Data":"90dffdff831f81ec106a662f05452347ab773a6c99014df4e1bcb7fcd0c2b01b"} Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.778945 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90dffdff831f81ec106a662f05452347ab773a6c99014df4e1bcb7fcd0c2b01b" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.779018 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vzfxr" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.784950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jkfd7" event={"ID":"d4181f6c-4f0a-41fb-af82-f7f10f85c117","Type":"ContainerDied","Data":"bdb41ab633aeb4cfebad17d25ffb16d044ab46adda60193229e5ffce6dcd25e0"} Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.784995 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb41ab633aeb4cfebad17d25ffb16d044ab46adda60193229e5ffce6dcd25e0" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.794090 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jkfd7" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.869691 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-credential-keys\") pod \"402436df-c2b0-435a-8fed-4d88a3af1e40\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.870021 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-fernet-keys\") pod \"402436df-c2b0-435a-8fed-4d88a3af1e40\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.870071 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-combined-ca-bundle\") pod \"402436df-c2b0-435a-8fed-4d88a3af1e40\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.870115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-scripts\") pod \"402436df-c2b0-435a-8fed-4d88a3af1e40\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.870136 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-config-data\") pod \"402436df-c2b0-435a-8fed-4d88a3af1e40\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.870432 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lc7s\" (UniqueName: \"kubernetes.io/projected/402436df-c2b0-435a-8fed-4d88a3af1e40-kube-api-access-4lc7s\") pod \"402436df-c2b0-435a-8fed-4d88a3af1e40\" (UID: \"402436df-c2b0-435a-8fed-4d88a3af1e40\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.888049 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-scripts" (OuterVolumeSpecName: "scripts") pod "402436df-c2b0-435a-8fed-4d88a3af1e40" (UID: "402436df-c2b0-435a-8fed-4d88a3af1e40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.888535 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402436df-c2b0-435a-8fed-4d88a3af1e40-kube-api-access-4lc7s" (OuterVolumeSpecName: "kube-api-access-4lc7s") pod "402436df-c2b0-435a-8fed-4d88a3af1e40" (UID: "402436df-c2b0-435a-8fed-4d88a3af1e40"). InnerVolumeSpecName "kube-api-access-4lc7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.889183 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-66b57bb577-p2b4n"] Nov 23 04:12:02 crc kubenswrapper[4751]: E1123 04:12:02.889570 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402436df-c2b0-435a-8fed-4d88a3af1e40" containerName="keystone-bootstrap" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.889589 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="402436df-c2b0-435a-8fed-4d88a3af1e40" containerName="keystone-bootstrap" Nov 23 04:12:02 crc kubenswrapper[4751]: E1123 04:12:02.889611 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4181f6c-4f0a-41fb-af82-f7f10f85c117" containerName="neutron-db-sync" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.889617 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4181f6c-4f0a-41fb-af82-f7f10f85c117" containerName="neutron-db-sync" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.889797 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4181f6c-4f0a-41fb-af82-f7f10f85c117" containerName="neutron-db-sync" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.890263 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="402436df-c2b0-435a-8fed-4d88a3af1e40" containerName="keystone-bootstrap" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.890393 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "402436df-c2b0-435a-8fed-4d88a3af1e40" (UID: "402436df-c2b0-435a-8fed-4d88a3af1e40"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.890414 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "402436df-c2b0-435a-8fed-4d88a3af1e40" (UID: "402436df-c2b0-435a-8fed-4d88a3af1e40"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.890842 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.895129 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.895148 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.906193 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66b57bb577-p2b4n"] Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.937372 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hhmh2" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.940278 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "402436df-c2b0-435a-8fed-4d88a3af1e40" (UID: "402436df-c2b0-435a-8fed-4d88a3af1e40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.944585 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-config-data" (OuterVolumeSpecName: "config-data") pod "402436df-c2b0-435a-8fed-4d88a3af1e40" (UID: "402436df-c2b0-435a-8fed-4d88a3af1e40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.973591 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-combined-ca-bundle\") pod \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.973688 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-config\") pod \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.973768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd9dw\" (UniqueName: \"kubernetes.io/projected/d4181f6c-4f0a-41fb-af82-f7f10f85c117-kube-api-access-zd9dw\") pod \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\" (UID: \"d4181f6c-4f0a-41fb-af82-f7f10f85c117\") " Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.974162 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.974180 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.974191 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lc7s\" (UniqueName: \"kubernetes.io/projected/402436df-c2b0-435a-8fed-4d88a3af1e40-kube-api-access-4lc7s\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.974204 4751 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.974214 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.974223 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402436df-c2b0-435a-8fed-4d88a3af1e40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:02 crc kubenswrapper[4751]: I1123 04:12:02.977766 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4181f6c-4f0a-41fb-af82-f7f10f85c117-kube-api-access-zd9dw" (OuterVolumeSpecName: "kube-api-access-zd9dw") pod "d4181f6c-4f0a-41fb-af82-f7f10f85c117" (UID: "d4181f6c-4f0a-41fb-af82-f7f10f85c117"). InnerVolumeSpecName "kube-api-access-zd9dw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.013336 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-config" (OuterVolumeSpecName: "config") pod "d4181f6c-4f0a-41fb-af82-f7f10f85c117" (UID: "d4181f6c-4f0a-41fb-af82-f7f10f85c117"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.017450 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4181f6c-4f0a-41fb-af82-f7f10f85c117" (UID: "d4181f6c-4f0a-41fb-af82-f7f10f85c117"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.017541 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.017573 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.054332 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.054426 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.056999 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.068229 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.075677 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-scripts\") pod \"ee0ce204-8f54-4c8c-98a6-f18f36873339\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.075757 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-combined-ca-bundle\") pod \"ee0ce204-8f54-4c8c-98a6-f18f36873339\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.075801 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x79lk\" (UniqueName: \"kubernetes.io/projected/ee0ce204-8f54-4c8c-98a6-f18f36873339-kube-api-access-x79lk\") pod \"ee0ce204-8f54-4c8c-98a6-f18f36873339\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.075923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ce204-8f54-4c8c-98a6-f18f36873339-logs\") pod \"ee0ce204-8f54-4c8c-98a6-f18f36873339\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.075994 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-config-data\") pod \"ee0ce204-8f54-4c8c-98a6-f18f36873339\" (UID: \"ee0ce204-8f54-4c8c-98a6-f18f36873339\") " Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-fernet-keys\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076198 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-credential-keys\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076225 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-scripts\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076262 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-internal-tls-certs\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076307 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvp5g\" (UniqueName: \"kubernetes.io/projected/bf363ce8-cc62-4c00-90f1-adfe3e26e834-kube-api-access-nvp5g\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076333 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-config-data\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-combined-ca-bundle\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076416 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-public-tls-certs\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076470 4751 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zd9dw\" (UniqueName: \"kubernetes.io/projected/d4181f6c-4f0a-41fb-af82-f7f10f85c117-kube-api-access-zd9dw\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076481 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.076490 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4181f6c-4f0a-41fb-af82-f7f10f85c117-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.080608 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0ce204-8f54-4c8c-98a6-f18f36873339-logs" (OuterVolumeSpecName: "logs") pod "ee0ce204-8f54-4c8c-98a6-f18f36873339" (UID: "ee0ce204-8f54-4c8c-98a6-f18f36873339"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.081516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0ce204-8f54-4c8c-98a6-f18f36873339-kube-api-access-x79lk" (OuterVolumeSpecName: "kube-api-access-x79lk") pod "ee0ce204-8f54-4c8c-98a6-f18f36873339" (UID: "ee0ce204-8f54-4c8c-98a6-f18f36873339"). InnerVolumeSpecName "kube-api-access-x79lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.089789 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.091954 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-scripts" (OuterVolumeSpecName: "scripts") pod "ee0ce204-8f54-4c8c-98a6-f18f36873339" (UID: "ee0ce204-8f54-4c8c-98a6-f18f36873339"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.102427 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee0ce204-8f54-4c8c-98a6-f18f36873339" (UID: "ee0ce204-8f54-4c8c-98a6-f18f36873339"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.106073 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.110497 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-config-data" (OuterVolumeSpecName: "config-data") pod "ee0ce204-8f54-4c8c-98a6-f18f36873339" (UID: "ee0ce204-8f54-4c8c-98a6-f18f36873339"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.177551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-credential-keys\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.177624 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-scripts\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.177652 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-internal-tls-certs\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.177709 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvp5g\" (UniqueName: \"kubernetes.io/projected/bf363ce8-cc62-4c00-90f1-adfe3e26e834-kube-api-access-nvp5g\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.177748 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-config-data\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.177809 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-combined-ca-bundle\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.177895 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-public-tls-certs\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.177944 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-fernet-keys\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.178011 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee0ce204-8f54-4c8c-98a6-f18f36873339-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.178025 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.178035 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.178047 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ce204-8f54-4c8c-98a6-f18f36873339-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.178058 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x79lk\" (UniqueName: \"kubernetes.io/projected/ee0ce204-8f54-4c8c-98a6-f18f36873339-kube-api-access-x79lk\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.182268 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-credential-keys\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.183214 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-combined-ca-bundle\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.184506 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-config-data\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.184916 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-internal-tls-certs\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.186862 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-fernet-keys\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.188918 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-public-tls-certs\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.189308 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf363ce8-cc62-4c00-90f1-adfe3e26e834-scripts\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: 
I1123 04:12:03.199169 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvp5g\" (UniqueName: \"kubernetes.io/projected/bf363ce8-cc62-4c00-90f1-adfe3e26e834-kube-api-access-nvp5g\") pod \"keystone-66b57bb577-p2b4n\" (UID: \"bf363ce8-cc62-4c00-90f1-adfe3e26e834\") " pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.257058 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.681122 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66b57bb577-p2b4n"] Nov 23 04:12:03 crc kubenswrapper[4751]: W1123 04:12:03.688295 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf363ce8_cc62_4c00_90f1_adfe3e26e834.slice/crio-5f91441574d13e0762bfb6d7e5c3d87c2ddfbfd38f49ff4e8d82b96d18f75a8e WatchSource:0}: Error finding container 5f91441574d13e0762bfb6d7e5c3d87c2ddfbfd38f49ff4e8d82b96d18f75a8e: Status 404 returned error can't find the container with id 5f91441574d13e0762bfb6d7e5c3d87c2ddfbfd38f49ff4e8d82b96d18f75a8e Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.812399 4751 generic.go:334] "Generic (PLEG): container finished" podID="4ffd47c6-ed23-4bc3-be63-dd817807dc3e" containerID="965f7e2ca2d2e35b337bbf8191496d92baeed52f1c850233659f271ae60e5e82" exitCode=0 Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.812461 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q65sd" event={"ID":"4ffd47c6-ed23-4bc3-be63-dd817807dc3e","Type":"ContainerDied","Data":"965f7e2ca2d2e35b337bbf8191496d92baeed52f1c850233659f271ae60e5e82"} Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.821530 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66b57bb577-p2b4n" event={"ID":"bf363ce8-cc62-4c00-90f1-adfe3e26e834","Type":"ContainerStarted","Data":"5f91441574d13e0762bfb6d7e5c3d87c2ddfbfd38f49ff4e8d82b96d18f75a8e"} Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.831494 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hhmh2" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.832963 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jkfd7" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.836453 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b369c41-886d-44cc-821b-2d415431f9ec","Type":"ContainerStarted","Data":"be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f"} Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.837003 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.838677 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.838700 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.838800 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.970133 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-shxnw"] Nov 23 04:12:03 crc kubenswrapper[4751]: E1123 04:12:03.970792 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0ce204-8f54-4c8c-98a6-f18f36873339" containerName="placement-db-sync" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.970808 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0ce204-8f54-4c8c-98a6-f18f36873339" containerName="placement-db-sync" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.971052 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0ce204-8f54-4c8c-98a6-f18f36873339" containerName="placement-db-sync" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.974606 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:03 crc kubenswrapper[4751]: I1123 04:12:03.986883 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-shxnw"] Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.091779 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78fd8b465b-jwbnc"] Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.093135 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.115315 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.120438 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6d9c6b99fd-4x95v"] Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.115507 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.115524 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xtrvh" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.115746 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.130117 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-config\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.130240 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.130291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncz62\" (UniqueName: \"kubernetes.io/projected/19fc4685-a555-4653-8702-8a3e03e6a8b3-kube-api-access-ncz62\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.130324 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.151976 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.152146 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.136813 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.146760 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78fd8b465b-jwbnc"] Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.156813 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.157068 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.157145 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.157559 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.157068 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mh4tl" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.207623 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d9c6b99fd-4x95v"] Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.254060 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dslv\" (UniqueName: \"kubernetes.io/projected/748e93b6-b72d-4fd1-8542-b37b5d4d7031-kube-api-access-2dslv\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.254114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnl8h\" (UniqueName: \"kubernetes.io/projected/4fae0d08-08a7-49ff-aa50-5979741223b9-kube-api-access-dnl8h\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.254150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-svc\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255384 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-config\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255422 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-scripts\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " 
pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255560 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-internal-tls-certs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255604 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748e93b6-b72d-4fd1-8542-b37b5d4d7031-logs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-combined-ca-bundle\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255665 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-config-data\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255690 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-config\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255724 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-httpd-config\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255750 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-public-tls-certs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-combined-ca-bundle\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255804 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-ovndb-tls-certs\") pod \"neutron-78fd8b465b-jwbnc\" (UID: 
\"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255907 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncz62\" (UniqueName: \"kubernetes.io/projected/19fc4685-a555-4653-8702-8a3e03e6a8b3-kube-api-access-ncz62\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255945 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.255980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.257121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.257193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-config\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.257542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.257550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.302541 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncz62\" (UniqueName: \"kubernetes.io/projected/19fc4685-a555-4653-8702-8a3e03e6a8b3-kube-api-access-ncz62\") pod \"dnsmasq-dns-6b7b667979-shxnw\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " 
pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748e93b6-b72d-4fd1-8542-b37b5d4d7031-logs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357805 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-combined-ca-bundle\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357822 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-config-data\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357845 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-httpd-config\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357862 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-public-tls-certs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-combined-ca-bundle\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357897 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-ovndb-tls-certs\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357954 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dslv\" (UniqueName: \"kubernetes.io/projected/748e93b6-b72d-4fd1-8542-b37b5d4d7031-kube-api-access-2dslv\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.357982 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnl8h\" (UniqueName: \"kubernetes.io/projected/4fae0d08-08a7-49ff-aa50-5979741223b9-kube-api-access-dnl8h\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 
04:12:04.358011 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-config\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.358026 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-scripts\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.358065 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-internal-tls-certs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.358253 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748e93b6-b72d-4fd1-8542-b37b5d4d7031-logs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.362614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-internal-tls-certs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.363216 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-scripts\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.363609 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-combined-ca-bundle\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.365493 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-ovndb-tls-certs\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.369045 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-httpd-config\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.370051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-config-data\") pod 
\"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.370678 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/748e93b6-b72d-4fd1-8542-b37b5d4d7031-public-tls-certs\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.370715 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-combined-ca-bundle\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.370726 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-config\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.382947 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dslv\" (UniqueName: \"kubernetes.io/projected/748e93b6-b72d-4fd1-8542-b37b5d4d7031-kube-api-access-2dslv\") pod \"placement-6d9c6b99fd-4x95v\" (UID: \"748e93b6-b72d-4fd1-8542-b37b5d4d7031\") " pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.383631 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnl8h\" (UniqueName: \"kubernetes.io/projected/4fae0d08-08a7-49ff-aa50-5979741223b9-kube-api-access-dnl8h\") pod \"neutron-78fd8b465b-jwbnc\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.522152 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.554016 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.600988 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.877982 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66b57bb577-p2b4n" event={"ID":"bf363ce8-cc62-4c00-90f1-adfe3e26e834","Type":"ContainerStarted","Data":"97484f7379135062f1be33861e5a49a2b3ed17342af69d4c8493fe4db18caa43"} Nov 23 04:12:04 crc kubenswrapper[4751]: I1123 04:12:04.911393 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-66b57bb577-p2b4n" podStartSLOduration=2.911207482 podStartE2EDuration="2.911207482s" podCreationTimestamp="2025-11-23 04:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:04.891239724 +0000 UTC m=+1021.084911083" watchObservedRunningTime="2025-11-23 04:12:04.911207482 +0000 UTC m=+1021.104878841" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.207880 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78fd8b465b-jwbnc"] Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.259485 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d9c6b99fd-4x95v"] Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.552919 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-shxnw"] Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.553562 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-q65sd" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.692005 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-combined-ca-bundle\") pod \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.692105 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wxbw\" (UniqueName: \"kubernetes.io/projected/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-kube-api-access-7wxbw\") pod \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.692152 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-db-sync-config-data\") pod \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\" (UID: \"4ffd47c6-ed23-4bc3-be63-dd817807dc3e\") " Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.704040 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-kube-api-access-7wxbw" (OuterVolumeSpecName: "kube-api-access-7wxbw") pod "4ffd47c6-ed23-4bc3-be63-dd817807dc3e" (UID: "4ffd47c6-ed23-4bc3-be63-dd817807dc3e"). InnerVolumeSpecName "kube-api-access-7wxbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.707038 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4ffd47c6-ed23-4bc3-be63-dd817807dc3e" (UID: "4ffd47c6-ed23-4bc3-be63-dd817807dc3e"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.766714 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ffd47c6-ed23-4bc3-be63-dd817807dc3e" (UID: "4ffd47c6-ed23-4bc3-be63-dd817807dc3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.794781 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.794829 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wxbw\" (UniqueName: \"kubernetes.io/projected/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-kube-api-access-7wxbw\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.794841 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ffd47c6-ed23-4bc3-be63-dd817807dc3e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.907673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d9c6b99fd-4x95v" event={"ID":"748e93b6-b72d-4fd1-8542-b37b5d4d7031","Type":"ContainerStarted","Data":"09b952534fa49d7452ea46cf3be89b7a868141d9d58fa0c856b82fd7bb448563"} Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.921995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-shxnw" event={"ID":"19fc4685-a555-4653-8702-8a3e03e6a8b3","Type":"ContainerStarted","Data":"ed292843b7c22937bb68f5667cd4403142c896cefd1fe73b50aa759c524f8991"} Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.932637 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78fd8b465b-jwbnc" event={"ID":"4fae0d08-08a7-49ff-aa50-5979741223b9","Type":"ContainerStarted","Data":"47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8"} Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.932686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78fd8b465b-jwbnc" event={"ID":"4fae0d08-08a7-49ff-aa50-5979741223b9","Type":"ContainerStarted","Data":"a3f38d203fbe99e0c069cea59a8ce292fb5c592e04b58352457ec2baa1291805"} Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.962737 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.962760 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.963446 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-q65sd" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.963569 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q65sd" event={"ID":"4ffd47c6-ed23-4bc3-be63-dd817807dc3e","Type":"ContainerDied","Data":"1ce01dd4b57ae884254f35e240d21d3006da5bf93add4876ebe0b5eaeb4e0fc9"} Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.963596 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce01dd4b57ae884254f35e240d21d3006da5bf93add4876ebe0b5eaeb4e0fc9" Nov 23 04:12:05 crc kubenswrapper[4751]: I1123 04:12:05.963961 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.260430 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-86c6898967-wnj7z"] Nov 23 04:12:06 crc kubenswrapper[4751]: E1123 04:12:06.261620 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffd47c6-ed23-4bc3-be63-dd817807dc3e" containerName="barbican-db-sync" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.261637 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffd47c6-ed23-4bc3-be63-dd817807dc3e" containerName="barbican-db-sync" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.261991 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffd47c6-ed23-4bc3-be63-dd817807dc3e" containerName="barbican-db-sync" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.285233 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.300622 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.300928 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-s4sk6" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.301056 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.308490 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-combined-ca-bundle\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.308603 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1babe827-384d-4185-90fb-021a93e62b38-logs\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.308636 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-config-data-custom\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.308698 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvxv\" (UniqueName: \"kubernetes.io/projected/1babe827-384d-4185-90fb-021a93e62b38-kube-api-access-thvxv\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.308731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-config-data\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.312067 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86c6898967-wnj7z"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.337619 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65dbdd4878-rj2c4"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.340330 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.352157 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65dbdd4878-rj2c4"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.371323 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.396276 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-shxnw"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.412910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-combined-ca-bundle\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.412997 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-combined-ca-bundle\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.413036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-config-data\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.413064 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5sv\" (UniqueName: \"kubernetes.io/projected/e290c0f4-7b34-4063-a8f8-aa5123762b03-kube-api-access-8z5sv\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " 
pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.413090 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1babe827-384d-4185-90fb-021a93e62b38-logs\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.413107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-config-data-custom\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.413128 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-config-data-custom\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.413175 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thvxv\" (UniqueName: \"kubernetes.io/projected/1babe827-384d-4185-90fb-021a93e62b38-kube-api-access-thvxv\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.413195 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-config-data\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.413212 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e290c0f4-7b34-4063-a8f8-aa5123762b03-logs\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.415812 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1babe827-384d-4185-90fb-021a93e62b38-logs\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.424035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-combined-ca-bundle\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.427928 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-config-data-custom\") pod 
\"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.431825 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1babe827-384d-4185-90fb-021a93e62b38-config-data\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.431895 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-v6fg9"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.440595 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thvxv\" (UniqueName: \"kubernetes.io/projected/1babe827-384d-4185-90fb-021a93e62b38-kube-api-access-thvxv\") pod \"barbican-worker-86c6898967-wnj7z\" (UID: \"1babe827-384d-4185-90fb-021a93e62b38\") " pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.450469 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.478652 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-v6fg9"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.496531 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6fb764c846-4s62f"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.498476 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.507643 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.515466 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fb764c846-4s62f"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.516491 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-config-data\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.516597 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.516710 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z5sv\" (UniqueName: \"kubernetes.io/projected/e290c0f4-7b34-4063-a8f8-aa5123762b03-kube-api-access-8z5sv\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.516784 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxhl\" (UniqueName: 
\"kubernetes.io/projected/542a9266-a029-4159-9512-8e10600dfb46-kube-api-access-jzxhl\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.516911 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.516981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-config-data-custom\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.517065 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-config\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.517160 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e290c0f4-7b34-4063-a8f8-aa5123762b03-logs\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.517243 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-combined-ca-bundle\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.517329 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.517506 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.517929 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e290c0f4-7b34-4063-a8f8-aa5123762b03-logs\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.547392 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-config-data\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.548736 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-config-data-custom\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.549900 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e290c0f4-7b34-4063-a8f8-aa5123762b03-combined-ca-bundle\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.550737 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z5sv\" (UniqueName: \"kubernetes.io/projected/e290c0f4-7b34-4063-a8f8-aa5123762b03-kube-api-access-8z5sv\") pod \"barbican-keystone-listener-65dbdd4878-rj2c4\" (UID: \"e290c0f4-7b34-4063-a8f8-aa5123762b03\") " pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621504 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3643cf0c-8c87-4a08-8f43-3c4d7883d890-logs\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621572 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621598 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621624 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data-custom\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621682 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-combined-ca-bundle\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621709 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621733 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzxhl\" (UniqueName: \"kubernetes.io/projected/542a9266-a029-4159-9512-8e10600dfb46-kube-api-access-jzxhl\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621759 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621788 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-config\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.621803 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xrt\" (UniqueName: \"kubernetes.io/projected/3643cf0c-8c87-4a08-8f43-3c4d7883d890-kube-api-access-b9xrt\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.623954 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.623954 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.627013 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-config\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.627986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.630707 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64b84b8669-6xvhn"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.630935 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.632192 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.635804 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.636291 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.655664 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzxhl\" (UniqueName: \"kubernetes.io/projected/542a9266-a029-4159-9512-8e10600dfb46-kube-api-access-jzxhl\") pod \"dnsmasq-dns-848cf88cfc-v6fg9\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.673090 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64b84b8669-6xvhn"] Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.673462 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-86c6898967-wnj7z" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728487 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-internal-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728629 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-combined-ca-bundle\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3643cf0c-8c87-4a08-8f43-3c4d7883d890-logs\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728691 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-config\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728713 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-ovndb-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-public-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq7cj\" (UniqueName: \"kubernetes.io/projected/61f0356e-5917-45f1-86a3-75f15d10ac71-kube-api-access-cq7cj\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data-custom\") pod \"barbican-api-6fb764c846-4s62f\" (UID: 
\"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728926 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-combined-ca-bundle\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.728998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9xrt\" (UniqueName: \"kubernetes.io/projected/3643cf0c-8c87-4a08-8f43-3c4d7883d890-kube-api-access-b9xrt\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.729017 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-httpd-config\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.730520 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.732057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3643cf0c-8c87-4a08-8f43-3c4d7883d890-logs\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.738856 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.740024 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-combined-ca-bundle\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.749748 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data-custom\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.750464 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.758994 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.764248 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9xrt\" (UniqueName: 
\"kubernetes.io/projected/3643cf0c-8c87-4a08-8f43-3c4d7883d890-kube-api-access-b9xrt\") pod \"barbican-api-6fb764c846-4s62f\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.785735 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.805064 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.805340 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.811860 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.830955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-httpd-config\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.831014 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-internal-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.831065 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-combined-ca-bundle\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.831090 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-config\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.831114 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-ovndb-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.831139 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-public-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.831215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq7cj\" (UniqueName: \"kubernetes.io/projected/61f0356e-5917-45f1-86a3-75f15d10ac71-kube-api-access-cq7cj\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " 
pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.839042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-combined-ca-bundle\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.841005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-httpd-config\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.841465 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.846196 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-config\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.848405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-ovndb-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.848902 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-public-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.865460 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq7cj\" (UniqueName: \"kubernetes.io/projected/61f0356e-5917-45f1-86a3-75f15d10ac71-kube-api-access-cq7cj\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:06 crc kubenswrapper[4751]: I1123 04:12:06.874981 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61f0356e-5917-45f1-86a3-75f15d10ac71-internal-tls-certs\") pod \"neutron-64b84b8669-6xvhn\" (UID: \"61f0356e-5917-45f1-86a3-75f15d10ac71\") " pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.037063 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.041755 4751 generic.go:334] "Generic (PLEG): container finished" podID="19fc4685-a555-4653-8702-8a3e03e6a8b3" containerID="54fd9f9a5856f4cf8fcb28540ca4bdb883024191d5384f61c35210f768df5301" exitCode=0 Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.042553 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-shxnw" event={"ID":"19fc4685-a555-4653-8702-8a3e03e6a8b3","Type":"ContainerDied","Data":"54fd9f9a5856f4cf8fcb28540ca4bdb883024191d5384f61c35210f768df5301"} Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.046298 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78fd8b465b-jwbnc" event={"ID":"4fae0d08-08a7-49ff-aa50-5979741223b9","Type":"ContainerStarted","Data":"ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd"} Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.046829 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.059645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d9c6b99fd-4x95v" event={"ID":"748e93b6-b72d-4fd1-8542-b37b5d4d7031","Type":"ContainerStarted","Data":"e9db28098ceea53fa41c7c93c9a7856295f0252ba742829e709269de2003003f"} Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.059674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d9c6b99fd-4x95v" event={"ID":"748e93b6-b72d-4fd1-8542-b37b5d4d7031","Type":"ContainerStarted","Data":"d0c1ec471dd91057075acb9ec005122ca2e836745cabb1e2fcdadb8e7a69987c"} Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.059686 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.060469 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.125310 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78fd8b465b-jwbnc" podStartSLOduration=3.125273226 podStartE2EDuration="3.125273226s" podCreationTimestamp="2025-11-23 04:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:07.093118767 +0000 UTC m=+1023.286790126" watchObservedRunningTime="2025-11-23 04:12:07.125273226 +0000 UTC m=+1023.318944585" Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.200748 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6d9c6b99fd-4x95v" podStartSLOduration=3.20072264 podStartE2EDuration="3.20072264s" podCreationTimestamp="2025-11-23 04:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:07.118654931 +0000 UTC m=+1023.312326290" watchObservedRunningTime="2025-11-23 04:12:07.20072264 +0000 UTC m=+1023.394394009" Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.532501 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86c6898967-wnj7z"] Nov 23 04:12:07 crc kubenswrapper[4751]: E1123 04:12:07.641806 4751 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 23 04:12:07 crc 
kubenswrapper[4751]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/19fc4685-a555-4653-8702-8a3e03e6a8b3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 23 04:12:07 crc kubenswrapper[4751]: > podSandboxID="ed292843b7c22937bb68f5667cd4403142c896cefd1fe73b50aa759c524f8991" Nov 23 04:12:07 crc kubenswrapper[4751]: E1123 04:12:07.641966 4751 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 23 04:12:07 crc kubenswrapper[4751]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n548h78h598h67bh67chf7h67dhch88h5bh596h546h5b9h5b7h57hd9h58h5dbh674hfch548h549h99hbch5c9h546h687h687h648hd4h8bh98q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ncz62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6b7b667979-shxnw_openstack(19fc4685-a555-4653-8702-8a3e03e6a8b3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/19fc4685-a555-4653-8702-8a3e03e6a8b3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 23 04:12:07 crc kubenswrapper[4751]: > logger="UnhandledError" Nov 23 04:12:07 crc kubenswrapper[4751]: E1123 04:12:07.643783 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/19fc4685-a555-4653-8702-8a3e03e6a8b3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6b7b667979-shxnw" podUID="19fc4685-a555-4653-8702-8a3e03e6a8b3" Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.695324 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-v6fg9"] Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.732740 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65dbdd4878-rj2c4"] Nov 23 04:12:07 crc kubenswrapper[4751]: I1123 04:12:07.956072 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fb764c846-4s62f"] Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.066271 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64b84b8669-6xvhn"] Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.079841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86c6898967-wnj7z" event={"ID":"1babe827-384d-4185-90fb-021a93e62b38","Type":"ContainerStarted","Data":"43bfb1df62bbdfafaf558a44fea36fd6ce41282b0a8236fbcd9c371fa29d6f44"} Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.084971 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb764c846-4s62f" event={"ID":"3643cf0c-8c87-4a08-8f43-3c4d7883d890","Type":"ContainerStarted","Data":"4bd9ca9eca2a96ac80182aa12a003b11387657a03dab4ed1b61a628b2d732af6"} Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.100836 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" event={"ID":"542a9266-a029-4159-9512-8e10600dfb46","Type":"ContainerStarted","Data":"9059e324844cb1ecb567173a97ded2070f0c6015313ad2d97b2a1375c76a54fb"} Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.105370 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" 
event={"ID":"e290c0f4-7b34-4063-a8f8-aa5123762b03","Type":"ContainerStarted","Data":"017ad7266aa839f92e6e69ca4be6edfe567bf74b94ed2c7b5c892c6dbf8f7e1d"} Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.524544 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.688706 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-sb\") pod \"19fc4685-a555-4653-8702-8a3e03e6a8b3\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.689058 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-nb\") pod \"19fc4685-a555-4653-8702-8a3e03e6a8b3\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.689083 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-swift-storage-0\") pod \"19fc4685-a555-4653-8702-8a3e03e6a8b3\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.689161 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-svc\") pod \"19fc4685-a555-4653-8702-8a3e03e6a8b3\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.689218 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-config\") pod \"19fc4685-a555-4653-8702-8a3e03e6a8b3\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.689275 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncz62\" (UniqueName: \"kubernetes.io/projected/19fc4685-a555-4653-8702-8a3e03e6a8b3-kube-api-access-ncz62\") pod \"19fc4685-a555-4653-8702-8a3e03e6a8b3\" (UID: \"19fc4685-a555-4653-8702-8a3e03e6a8b3\") " Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.697627 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fc4685-a555-4653-8702-8a3e03e6a8b3-kube-api-access-ncz62" (OuterVolumeSpecName: "kube-api-access-ncz62") pod "19fc4685-a555-4653-8702-8a3e03e6a8b3" (UID: "19fc4685-a555-4653-8702-8a3e03e6a8b3"). InnerVolumeSpecName "kube-api-access-ncz62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.791917 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncz62\" (UniqueName: \"kubernetes.io/projected/19fc4685-a555-4653-8702-8a3e03e6a8b3-kube-api-access-ncz62\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.844037 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-config" (OuterVolumeSpecName: "config") pod "19fc4685-a555-4653-8702-8a3e03e6a8b3" (UID: "19fc4685-a555-4653-8702-8a3e03e6a8b3"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.851151 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "19fc4685-a555-4653-8702-8a3e03e6a8b3" (UID: "19fc4685-a555-4653-8702-8a3e03e6a8b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.895702 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.895736 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.951841 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "19fc4685-a555-4653-8702-8a3e03e6a8b3" (UID: "19fc4685-a555-4653-8702-8a3e03e6a8b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:08 crc kubenswrapper[4751]: I1123 04:12:08.966535 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "19fc4685-a555-4653-8702-8a3e03e6a8b3" (UID: "19fc4685-a555-4653-8702-8a3e03e6a8b3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:08.997108 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:08.997135 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.018935 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "19fc4685-a555-4653-8702-8a3e03e6a8b3" (UID: "19fc4685-a555-4653-8702-8a3e03e6a8b3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.149939 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/19fc4685-a555-4653-8702-8a3e03e6a8b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.189014 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b84b8669-6xvhn" event={"ID":"61f0356e-5917-45f1-86a3-75f15d10ac71","Type":"ContainerStarted","Data":"31488ad1742895de2df94ed140af4f8bd1c61e54e308126dc579e3a3851952e1"} Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.189059 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b84b8669-6xvhn" event={"ID":"61f0356e-5917-45f1-86a3-75f15d10ac71","Type":"ContainerStarted","Data":"9c0e47bc30108ce991f026a06c3ac4c7a63da0361733a1b1eefb7fbe0ea9c1de"} Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.192200 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-shxnw" event={"ID":"19fc4685-a555-4653-8702-8a3e03e6a8b3","Type":"ContainerDied","Data":"ed292843b7c22937bb68f5667cd4403142c896cefd1fe73b50aa759c524f8991"} Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.192233 4751 scope.go:117] "RemoveContainer" containerID="54fd9f9a5856f4cf8fcb28540ca4bdb883024191d5384f61c35210f768df5301" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.192358 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-shxnw" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.211603 4751 generic.go:334] "Generic (PLEG): container finished" podID="542a9266-a029-4159-9512-8e10600dfb46" containerID="7c5504eb784fd63f833ea87510c6f8fa59ed1d7e2de7b5b9101fe4cdc91e1ea4" exitCode=0 Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.211746 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" event={"ID":"542a9266-a029-4159-9512-8e10600dfb46","Type":"ContainerDied","Data":"7c5504eb784fd63f833ea87510c6f8fa59ed1d7e2de7b5b9101fe4cdc91e1ea4"} Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.225324 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-slzk9" event={"ID":"37ead28c-46bc-4415-a35c-1d3d8de722dd","Type":"ContainerStarted","Data":"3854bd35c015d29f7d785f3af44bcea5fe2d4e756be8f7308bb8cd96a5e2c9e5"} Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.296490 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb764c846-4s62f" event={"ID":"3643cf0c-8c87-4a08-8f43-3c4d7883d890","Type":"ContainerStarted","Data":"6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b"} Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.296730 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb764c846-4s62f" event={"ID":"3643cf0c-8c87-4a08-8f43-3c4d7883d890","Type":"ContainerStarted","Data":"5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17"} Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.296743 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.296778 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.300423 
4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-shxnw"] Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.316196 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-shxnw"] Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.324616 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-slzk9" podStartSLOduration=3.692546558 podStartE2EDuration="47.324588472s" podCreationTimestamp="2025-11-23 04:11:22 +0000 UTC" firstStartedPulling="2025-11-23 04:11:23.766847623 +0000 UTC m=+979.960518972" lastFinishedPulling="2025-11-23 04:12:07.398889527 +0000 UTC m=+1023.592560886" observedRunningTime="2025-11-23 04:12:09.305490267 +0000 UTC m=+1025.499161626" watchObservedRunningTime="2025-11-23 04:12:09.324588472 +0000 UTC m=+1025.518259821" Nov 23 04:12:09 crc kubenswrapper[4751]: I1123 04:12:09.346836 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6fb764c846-4s62f" podStartSLOduration=3.3468165 podStartE2EDuration="3.3468165s" podCreationTimestamp="2025-11-23 04:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:09.336464766 +0000 UTC m=+1025.530136135" watchObservedRunningTime="2025-11-23 04:12:09.3468165 +0000 UTC m=+1025.540487859" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.310906 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b84b8669-6xvhn" event={"ID":"61f0356e-5917-45f1-86a3-75f15d10ac71","Type":"ContainerStarted","Data":"17b2a8313db4ea9b927e66046ff20700b502809ed4c1538d39e9f8d0c599acd2"} Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.312231 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.326743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" event={"ID":"542a9266-a029-4159-9512-8e10600dfb46","Type":"ContainerStarted","Data":"539108f2455714d29035d73da756a01b46a0859d989aea077908301113d08039"} Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.327040 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.362631 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64b84b8669-6xvhn" podStartSLOduration=4.3626081899999996 podStartE2EDuration="4.36260819s" podCreationTimestamp="2025-11-23 04:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:10.355270037 +0000 UTC m=+1026.548941386" watchObservedRunningTime="2025-11-23 04:12:10.36260819 +0000 UTC m=+1026.556279559" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.404240 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b8f7cfdb6-q2828"] Nov 23 04:12:10 crc kubenswrapper[4751]: E1123 04:12:10.405114 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc4685-a555-4653-8702-8a3e03e6a8b3" containerName="init" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.405140 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc4685-a555-4653-8702-8a3e03e6a8b3" containerName="init" Nov 23 04:12:10 crc 
kubenswrapper[4751]: I1123 04:12:10.405608 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fc4685-a555-4653-8702-8a3e03e6a8b3" containerName="init" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.413531 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" podStartSLOduration=4.413505315 podStartE2EDuration="4.413505315s" podCreationTimestamp="2025-11-23 04:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:10.407760434 +0000 UTC m=+1026.601431793" watchObservedRunningTime="2025-11-23 04:12:10.413505315 +0000 UTC m=+1026.607176674" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.423129 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.430142 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.430474 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.472658 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b8f7cfdb6-q2828"] Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.503264 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-public-tls-certs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.503319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7gk\" (UniqueName: \"kubernetes.io/projected/356c133f-02f2-453d-a0a4-018aa4741eee-kube-api-access-bd7gk\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.503441 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-config-data-custom\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.503512 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-internal-tls-certs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.503531 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/356c133f-02f2-453d-a0a4-018aa4741eee-logs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.503615 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-combined-ca-bundle\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.503719 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-config-data\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.605366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-config-data-custom\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.605428 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-internal-tls-certs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.605455 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/356c133f-02f2-453d-a0a4-018aa4741eee-logs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.605480 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-combined-ca-bundle\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.605512 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-config-data\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.605559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-public-tls-certs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.605590 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd7gk\" (UniqueName: \"kubernetes.io/projected/356c133f-02f2-453d-a0a4-018aa4741eee-kube-api-access-bd7gk\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.606013 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/356c133f-02f2-453d-a0a4-018aa4741eee-logs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.612509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-config-data\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.612846 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-public-tls-certs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.614912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-internal-tls-certs\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.614986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-config-data-custom\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.615509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/356c133f-02f2-453d-a0a4-018aa4741eee-combined-ca-bundle\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.640717 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd7gk\" (UniqueName: \"kubernetes.io/projected/356c133f-02f2-453d-a0a4-018aa4741eee-kube-api-access-bd7gk\") pod \"barbican-api-6b8f7cfdb6-q2828\" (UID: \"356c133f-02f2-453d-a0a4-018aa4741eee\") " pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.658843 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fc4685-a555-4653-8702-8a3e03e6a8b3" path="/var/lib/kubelet/pods/19fc4685-a555-4653-8702-8a3e03e6a8b3/volumes" Nov 23 04:12:10 crc kubenswrapper[4751]: I1123 04:12:10.760203 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.018495 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-bc45b664d-wh6ld" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.104578 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789489d584-slcs8" podUID="49f1490c-4b27-47c0-bc36-688b467ebe2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.191835 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b8f7cfdb6-q2828"] Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.346275 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8f7cfdb6-q2828" event={"ID":"356c133f-02f2-453d-a0a4-018aa4741eee","Type":"ContainerStarted","Data":"306af3214076a53654ca66ec4290be4de5a6ccba9f56a70ba9d1e19cea9db7ec"} Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.350451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86c6898967-wnj7z" event={"ID":"1babe827-384d-4185-90fb-021a93e62b38","Type":"ContainerStarted","Data":"6fc20617e3c66b6ad438032ba59132d635c342d71cb4fcbd9a5b90a2694d0376"} Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.350489 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86c6898967-wnj7z" event={"ID":"1babe827-384d-4185-90fb-021a93e62b38","Type":"ContainerStarted","Data":"ac8474c6263e54625b80c4aab1e3d480b18c6c6106ed0d55b48b8f8874eef2bc"} Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.359069 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" event={"ID":"e290c0f4-7b34-4063-a8f8-aa5123762b03","Type":"ContainerStarted","Data":"25c44efa072cd163ef29539fd976dc3e0a95119d3e98a4bc327c5e1caf3e86bd"} Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.359145 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" event={"ID":"e290c0f4-7b34-4063-a8f8-aa5123762b03","Type":"ContainerStarted","Data":"06c34ef4b0bd3d0fc0f6a8fd2835f2d4361fbe2646afe7c5ab6c82ab65cbac55"} Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.389725 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65dbdd4878-rj2c4" podStartSLOduration=2.483751934 podStartE2EDuration="6.389710876s" podCreationTimestamp="2025-11-23 04:12:06 +0000 UTC" firstStartedPulling="2025-11-23 04:12:07.758454918 +0000 UTC m=+1023.952126277" lastFinishedPulling="2025-11-23 04:12:11.66441386 +0000 UTC m=+1027.858085219" observedRunningTime="2025-11-23 04:12:12.387206039 +0000 UTC m=+1028.580877398" watchObservedRunningTime="2025-11-23 04:12:12.389710876 +0000 UTC m=+1028.583382235" Nov 23 04:12:12 crc kubenswrapper[4751]: I1123 04:12:12.394020 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-86c6898967-wnj7z" podStartSLOduration=2.369976097 podStartE2EDuration="6.394013269s" podCreationTimestamp="2025-11-23 
04:12:06 +0000 UTC" firstStartedPulling="2025-11-23 04:12:07.641552229 +0000 UTC m=+1023.835223588" lastFinishedPulling="2025-11-23 04:12:11.665589401 +0000 UTC m=+1027.859260760" observedRunningTime="2025-11-23 04:12:12.372543222 +0000 UTC m=+1028.566214581" watchObservedRunningTime="2025-11-23 04:12:12.394013269 +0000 UTC m=+1028.587684628" Nov 23 04:12:13 crc kubenswrapper[4751]: I1123 04:12:13.369660 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8f7cfdb6-q2828" event={"ID":"356c133f-02f2-453d-a0a4-018aa4741eee","Type":"ContainerStarted","Data":"938a576d52cc411cd5e1c30ae5bcbe7a5317bf46ba14214fdb1622ef646e8bb9"} Nov 23 04:12:13 crc kubenswrapper[4751]: I1123 04:12:13.369901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b8f7cfdb6-q2828" event={"ID":"356c133f-02f2-453d-a0a4-018aa4741eee","Type":"ContainerStarted","Data":"6d640a0181ea096825c81088a0d9afed76aa98191ab1f0f8aacc547b71d844e0"} Nov 23 04:12:14 crc kubenswrapper[4751]: I1123 04:12:14.391570 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:14 crc kubenswrapper[4751]: I1123 04:12:14.391616 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:16 crc kubenswrapper[4751]: I1123 04:12:16.787291 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:16 crc kubenswrapper[4751]: I1123 04:12:16.806807 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b8f7cfdb6-q2828" podStartSLOduration=6.806786474 podStartE2EDuration="6.806786474s" podCreationTimestamp="2025-11-23 04:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:13.392721599 +0000 UTC m=+1029.586392958" watchObservedRunningTime="2025-11-23 04:12:16.806786474 +0000 UTC m=+1033.000457833" Nov 23 04:12:16 crc kubenswrapper[4751]: I1123 04:12:16.864077 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-tbgwf"] Nov 23 04:12:16 crc kubenswrapper[4751]: I1123 04:12:16.865252 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" podUID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" containerName="dnsmasq-dns" containerID="cri-o://f51486f79a394c0443d109eff2aab26f71dfa7cbb5c62cd629ac0d247e20ced6" gracePeriod=10 Nov 23 04:12:17 crc kubenswrapper[4751]: I1123 04:12:17.443387 4751 generic.go:334] "Generic (PLEG): container finished" podID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" containerID="f51486f79a394c0443d109eff2aab26f71dfa7cbb5c62cd629ac0d247e20ced6" exitCode=0 Nov 23 04:12:17 crc kubenswrapper[4751]: I1123 04:12:17.443632 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" event={"ID":"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab","Type":"ContainerDied","Data":"f51486f79a394c0443d109eff2aab26f71dfa7cbb5c62cd629ac0d247e20ced6"} Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.283293 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.302801 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:18 crc kubenswrapper[4751]: E1123 04:12:18.392912 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.441423 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-sb\") pod \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.441508 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjpwk\" (UniqueName: \"kubernetes.io/projected/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-kube-api-access-cjpwk\") pod \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.441573 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-nb\") pod \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.441787 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-svc\") pod \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.441844 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-swift-storage-0\") pod \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.441884 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-config\") pod \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\" (UID: \"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab\") " Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.457840 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.460871 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-kube-api-access-cjpwk" (OuterVolumeSpecName: "kube-api-access-cjpwk") pod "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" (UID: "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab"). InnerVolumeSpecName "kube-api-access-cjpwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.466840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" event={"ID":"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab","Type":"ContainerDied","Data":"d7fab97b623fccad4d291ad3dfedddcb440e48caf37ae5a7ef8d01b2e33f910f"} Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.466911 4751 scope.go:117] "RemoveContainer" containerID="f51486f79a394c0443d109eff2aab26f71dfa7cbb5c62cd629ac0d247e20ced6" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.466855 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-tbgwf" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.501102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b369c41-886d-44cc-821b-2d415431f9ec","Type":"ContainerStarted","Data":"e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4"} Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.501579 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="ceilometer-notification-agent" containerID="cri-o://725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649" gracePeriod=30 Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.501815 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.502049 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="proxy-httpd" containerID="cri-o://e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4" gracePeriod=30 Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.502090 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="sg-core" containerID="cri-o://be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f" gracePeriod=30 Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.518406 4751 generic.go:334] "Generic (PLEG): container finished" podID="37ead28c-46bc-4415-a35c-1d3d8de722dd" containerID="3854bd35c015d29f7d785f3af44bcea5fe2d4e756be8f7308bb8cd96a5e2c9e5" exitCode=0 Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.518563 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-slzk9" event={"ID":"37ead28c-46bc-4415-a35c-1d3d8de722dd","Type":"ContainerDied","Data":"3854bd35c015d29f7d785f3af44bcea5fe2d4e756be8f7308bb8cd96a5e2c9e5"} Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.519853 4751 scope.go:117] "RemoveContainer" containerID="01bcc21bd41c4c3f0f4a64e58c77446c651c4a893096fa98a58e0a82686b2ce7" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.543853 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjpwk\" (UniqueName: \"kubernetes.io/projected/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-kube-api-access-cjpwk\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.563907 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" (UID: 
"cfb27df9-70a5-4ef4-bd96-0257f7c5dbab"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.563804 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" (UID: "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.581576 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" (UID: "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.582306 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" (UID: "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.595935 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-config" (OuterVolumeSpecName: "config") pod "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" (UID: "cfb27df9-70a5-4ef4-bd96-0257f7c5dbab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.645549 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.645580 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.645591 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.645599 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.645608 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.786560 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-tbgwf"] Nov 23 04:12:18 crc kubenswrapper[4751]: I1123 04:12:18.794353 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-tbgwf"] Nov 23 04:12:19 crc kubenswrapper[4751]: I1123 04:12:19.535969 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b369c41-886d-44cc-821b-2d415431f9ec" containerID="e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4" exitCode=0 Nov 23 04:12:19 crc kubenswrapper[4751]: I1123 04:12:19.536340 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b369c41-886d-44cc-821b-2d415431f9ec" containerID="be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f" exitCode=2 Nov 23 04:12:19 crc kubenswrapper[4751]: I1123 04:12:19.536064 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b369c41-886d-44cc-821b-2d415431f9ec","Type":"ContainerDied","Data":"e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4"} Nov 23 04:12:19 crc kubenswrapper[4751]: I1123 04:12:19.536461 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b369c41-886d-44cc-821b-2d415431f9ec","Type":"ContainerDied","Data":"be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f"} Nov 23 04:12:19 crc kubenswrapper[4751]: I1123 04:12:19.933122 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-slzk9" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.080491 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-db-sync-config-data\") pod \"37ead28c-46bc-4415-a35c-1d3d8de722dd\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.080574 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37ead28c-46bc-4415-a35c-1d3d8de722dd-etc-machine-id\") pod \"37ead28c-46bc-4415-a35c-1d3d8de722dd\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.080602 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-config-data\") pod \"37ead28c-46bc-4415-a35c-1d3d8de722dd\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.080717 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-combined-ca-bundle\") pod \"37ead28c-46bc-4415-a35c-1d3d8de722dd\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.080825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-scripts\") pod \"37ead28c-46bc-4415-a35c-1d3d8de722dd\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.080869 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs7k8\" (UniqueName: \"kubernetes.io/projected/37ead28c-46bc-4415-a35c-1d3d8de722dd-kube-api-access-fs7k8\") pod \"37ead28c-46bc-4415-a35c-1d3d8de722dd\" (UID: \"37ead28c-46bc-4415-a35c-1d3d8de722dd\") " Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.081898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37ead28c-46bc-4415-a35c-1d3d8de722dd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37ead28c-46bc-4415-a35c-1d3d8de722dd" (UID: "37ead28c-46bc-4415-a35c-1d3d8de722dd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.086393 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-scripts" (OuterVolumeSpecName: "scripts") pod "37ead28c-46bc-4415-a35c-1d3d8de722dd" (UID: "37ead28c-46bc-4415-a35c-1d3d8de722dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.092230 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ead28c-46bc-4415-a35c-1d3d8de722dd-kube-api-access-fs7k8" (OuterVolumeSpecName: "kube-api-access-fs7k8") pod "37ead28c-46bc-4415-a35c-1d3d8de722dd" (UID: "37ead28c-46bc-4415-a35c-1d3d8de722dd"). InnerVolumeSpecName "kube-api-access-fs7k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.108217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "37ead28c-46bc-4415-a35c-1d3d8de722dd" (UID: "37ead28c-46bc-4415-a35c-1d3d8de722dd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.120516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37ead28c-46bc-4415-a35c-1d3d8de722dd" (UID: "37ead28c-46bc-4415-a35c-1d3d8de722dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.140821 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-config-data" (OuterVolumeSpecName: "config-data") pod "37ead28c-46bc-4415-a35c-1d3d8de722dd" (UID: "37ead28c-46bc-4415-a35c-1d3d8de722dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.183268 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.183296 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs7k8\" (UniqueName: \"kubernetes.io/projected/37ead28c-46bc-4415-a35c-1d3d8de722dd-kube-api-access-fs7k8\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.183309 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.183321 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37ead28c-46bc-4415-a35c-1d3d8de722dd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.183332 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.183366 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ead28c-46bc-4415-a35c-1d3d8de722dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.553219 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-slzk9" event={"ID":"37ead28c-46bc-4415-a35c-1d3d8de722dd","Type":"ContainerDied","Data":"aa831b7c31029ec4c4eed89d5d60f5e5eed7fda9d19aa9765e3b11733c28b33d"} Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.553276 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa831b7c31029ec4c4eed89d5d60f5e5eed7fda9d19aa9765e3b11733c28b33d" Nov 23 04:12:20 crc kubenswrapper[4751]: 
I1123 04:12:20.553389 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-slzk9" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.660206 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" path="/var/lib/kubelet/pods/cfb27df9-70a5-4ef4-bd96-0257f7c5dbab/volumes" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.936039 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:20 crc kubenswrapper[4751]: E1123 04:12:20.936419 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" containerName="dnsmasq-dns" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.936439 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" containerName="dnsmasq-dns" Nov 23 04:12:20 crc kubenswrapper[4751]: E1123 04:12:20.936451 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ead28c-46bc-4415-a35c-1d3d8de722dd" containerName="cinder-db-sync" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.936457 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ead28c-46bc-4415-a35c-1d3d8de722dd" containerName="cinder-db-sync" Nov 23 04:12:20 crc kubenswrapper[4751]: E1123 04:12:20.936483 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" containerName="init" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.936490 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" containerName="init" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.936788 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ead28c-46bc-4415-a35c-1d3d8de722dd" containerName="cinder-db-sync" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.936813 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb27df9-70a5-4ef4-bd96-0257f7c5dbab" containerName="dnsmasq-dns" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.937736 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.940459 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nr5bj" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.940679 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.940817 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.940933 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.951923 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.996918 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9scfw"] Nov 23 04:12:20 crc kubenswrapper[4751]: I1123 04:12:20.998411 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.015560 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9scfw"] Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107137 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107413 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107441 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107461 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107493 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107517 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnsr4\" (UniqueName: \"kubernetes.io/projected/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-kube-api-access-lnsr4\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-config\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107555 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107570 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-z6zxm\" (UniqueName: \"kubernetes.io/projected/c0605451-7647-4783-8601-9662f5c14868-kube-api-access-z6zxm\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107587 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.107629 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211472 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211624 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211656 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnsr4\" (UniqueName: \"kubernetes.io/projected/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-kube-api-access-lnsr4\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-config\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211707 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6zxm\" (UniqueName: \"kubernetes.io/projected/c0605451-7647-4783-8601-9662f5c14868-kube-api-access-z6zxm\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211776 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.211864 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.212884 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.216142 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-config\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.216805 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: 
\"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.217114 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.217709 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.218233 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.227293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.228601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-scripts\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.240271 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.250988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.252863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnsr4\" (UniqueName: \"kubernetes.io/projected/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-kube-api-access-lnsr4\") pod \"cinder-scheduler-0\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.266225 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6zxm\" (UniqueName: \"kubernetes.io/projected/c0605451-7647-4783-8601-9662f5c14868-kube-api-access-z6zxm\") pod \"dnsmasq-dns-6578955fd5-9scfw\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.270632 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.313252 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.315337 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.325180 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.341682 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.417667 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-scripts\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.417738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e616ddc-3fc0-451e-a375-fb3e447497a1-logs\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.417766 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.417806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e616ddc-3fc0-451e-a375-fb3e447497a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.417831 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.417849 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.417870 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97psx\" (UniqueName: \"kubernetes.io/projected/7e616ddc-3fc0-451e-a375-fb3e447497a1-kube-api-access-97psx\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.448888 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.506273 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.522395 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-scripts\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.522467 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e616ddc-3fc0-451e-a375-fb3e447497a1-logs\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.522493 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.522546 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e616ddc-3fc0-451e-a375-fb3e447497a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.522571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.522601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.522662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97psx\" (UniqueName: \"kubernetes.io/projected/7e616ddc-3fc0-451e-a375-fb3e447497a1-kube-api-access-97psx\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.538899 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.539439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e616ddc-3fc0-451e-a375-fb3e447497a1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.539704 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e616ddc-3fc0-451e-a375-fb3e447497a1-logs\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.543456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.544922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97psx\" (UniqueName: \"kubernetes.io/projected/7e616ddc-3fc0-451e-a375-fb3e447497a1-kube-api-access-97psx\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.545261 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-scripts\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.546646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data-custom\") pod \"cinder-api-0\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.588515 4751 generic.go:334] "Generic (PLEG): container finished" podID="7b369c41-886d-44cc-821b-2d415431f9ec" containerID="725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649" exitCode=0 Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.588839 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b369c41-886d-44cc-821b-2d415431f9ec","Type":"ContainerDied","Data":"725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649"} Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.588867 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b369c41-886d-44cc-821b-2d415431f9ec","Type":"ContainerDied","Data":"aef38b47615f7eb6446d09e702c3d83c81aaf1332d3005e1f772fcd88e238017"} Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.588884 4751 scope.go:117] "RemoveContainer" containerID="e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.589019 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.626977 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbtcm\" (UniqueName: \"kubernetes.io/projected/7b369c41-886d-44cc-821b-2d415431f9ec-kube-api-access-sbtcm\") pod \"7b369c41-886d-44cc-821b-2d415431f9ec\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.627051 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-log-httpd\") pod \"7b369c41-886d-44cc-821b-2d415431f9ec\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.627071 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-run-httpd\") pod \"7b369c41-886d-44cc-821b-2d415431f9ec\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.627093 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-sg-core-conf-yaml\") pod \"7b369c41-886d-44cc-821b-2d415431f9ec\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.627164 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-config-data\") pod \"7b369c41-886d-44cc-821b-2d415431f9ec\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.627186 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-combined-ca-bundle\") pod \"7b369c41-886d-44cc-821b-2d415431f9ec\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.627225 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-scripts\") pod \"7b369c41-886d-44cc-821b-2d415431f9ec\" (UID: \"7b369c41-886d-44cc-821b-2d415431f9ec\") " Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.628161 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7b369c41-886d-44cc-821b-2d415431f9ec" (UID: "7b369c41-886d-44cc-821b-2d415431f9ec"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.628564 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7b369c41-886d-44cc-821b-2d415431f9ec" (UID: "7b369c41-886d-44cc-821b-2d415431f9ec"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.633839 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-scripts" (OuterVolumeSpecName: "scripts") pod "7b369c41-886d-44cc-821b-2d415431f9ec" (UID: "7b369c41-886d-44cc-821b-2d415431f9ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.637527 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b369c41-886d-44cc-821b-2d415431f9ec-kube-api-access-sbtcm" (OuterVolumeSpecName: "kube-api-access-sbtcm") pod "7b369c41-886d-44cc-821b-2d415431f9ec" (UID: "7b369c41-886d-44cc-821b-2d415431f9ec"). InnerVolumeSpecName "kube-api-access-sbtcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.648822 4751 scope.go:117] "RemoveContainer" containerID="be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.671161 4751 scope.go:117] "RemoveContainer" containerID="725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.679984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7b369c41-886d-44cc-821b-2d415431f9ec" (UID: "7b369c41-886d-44cc-821b-2d415431f9ec"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.700919 4751 scope.go:117] "RemoveContainer" containerID="e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4" Nov 23 04:12:21 crc kubenswrapper[4751]: E1123 04:12:21.704490 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4\": container with ID starting with e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4 not found: ID does not exist" containerID="e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.704547 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4"} err="failed to get container status \"e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4\": rpc error: code = NotFound desc = could not find container \"e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4\": container with ID starting with e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4 not found: ID does not exist" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.704574 4751 scope.go:117] "RemoveContainer" containerID="be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f" Nov 23 04:12:21 crc kubenswrapper[4751]: E1123 04:12:21.707678 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f\": container with ID starting with be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f not found: ID does not exist" 
containerID="be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.707720 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f"} err="failed to get container status \"be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f\": rpc error: code = NotFound desc = could not find container \"be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f\": container with ID starting with be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f not found: ID does not exist" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.707745 4751 scope.go:117] "RemoveContainer" containerID="725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.713727 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b369c41-886d-44cc-821b-2d415431f9ec" (UID: "7b369c41-886d-44cc-821b-2d415431f9ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:21 crc kubenswrapper[4751]: E1123 04:12:21.715167 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649\": container with ID starting with 725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649 not found: ID does not exist" containerID="725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.715235 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649"} err="failed to get container status \"725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649\": rpc error: code = NotFound desc = could not find container \"725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649\": container with ID starting with 725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649 not found: ID does not exist" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.729314 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.729385 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbtcm\" (UniqueName: \"kubernetes.io/projected/7b369c41-886d-44cc-821b-2d415431f9ec-kube-api-access-sbtcm\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.729552 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-config-data" (OuterVolumeSpecName: "config-data") pod "7b369c41-886d-44cc-821b-2d415431f9ec" (UID: "7b369c41-886d-44cc-821b-2d415431f9ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.729698 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.729712 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b369c41-886d-44cc-821b-2d415431f9ec-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.729720 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.729729 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.784841 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.831998 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b369c41-886d-44cc-821b-2d415431f9ec-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.852241 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:21 crc kubenswrapper[4751]: I1123 04:12:21.994702 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.004610 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.014547 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:22 crc kubenswrapper[4751]: E1123 04:12:22.015445 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="sg-core" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.015462 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="sg-core" Nov 23 04:12:22 crc kubenswrapper[4751]: E1123 04:12:22.015477 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="proxy-httpd" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.015506 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="proxy-httpd" Nov 23 04:12:22 crc kubenswrapper[4751]: E1123 04:12:22.015521 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="ceilometer-notification-agent" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.015528 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="ceilometer-notification-agent" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.015809 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="sg-core" Nov 23 04:12:22 crc 
kubenswrapper[4751]: I1123 04:12:22.015828 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="proxy-httpd" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.015846 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" containerName="ceilometer-notification-agent" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.017648 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.025252 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.025335 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.025592 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.033052 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9scfw"] Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.136549 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.136977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-log-httpd\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.137067 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.137147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf9km\" (UniqueName: \"kubernetes.io/projected/eb989e90-55d2-4317-8466-2c724c57428e-kube-api-access-sf9km\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.137175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-scripts\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.137234 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-run-httpd\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.137370 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-config-data\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.240219 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.240325 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-log-httpd\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.240385 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.240421 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf9km\" (UniqueName: \"kubernetes.io/projected/eb989e90-55d2-4317-8466-2c724c57428e-kube-api-access-sf9km\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.240447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-scripts\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.240507 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-run-httpd\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.240562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-config-data\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.241326 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-log-httpd\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.241843 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-run-httpd\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.245404 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-scripts\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.245552 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.245779 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-config-data\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.250679 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.260189 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf9km\" (UniqueName: \"kubernetes.io/projected/eb989e90-55d2-4317-8466-2c724c57428e-kube-api-access-sf9km\") pod \"ceilometer-0\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.340906 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.364700 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.641202 4751 generic.go:334] "Generic (PLEG): container finished" podID="c0605451-7647-4783-8601-9662f5c14868" containerID="a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54" exitCode=0 Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.641272 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" event={"ID":"c0605451-7647-4783-8601-9662f5c14868","Type":"ContainerDied","Data":"a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54"} Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.641298 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" event={"ID":"c0605451-7647-4783-8601-9662f5c14868","Type":"ContainerStarted","Data":"ef1c143beee664876b2ed288c5359053928a5f5e1776a01542403dfaec87dbdd"} Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.659115 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b369c41-886d-44cc-821b-2d415431f9ec" path="/var/lib/kubelet/pods/7b369c41-886d-44cc-821b-2d415431f9ec/volumes" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.660016 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd392cc8-4cbd-4f32-82f3-9943ca7133e4","Type":"ContainerStarted","Data":"5c4d3a945fe23ddd046485bc7929343b530f755cd309101e730a94991e93ad32"} Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.660036 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"7e616ddc-3fc0-451e-a375-fb3e447497a1","Type":"ContainerStarted","Data":"a693c307839d39207ed5bf62570fb3a1ec4f3b3db391a754f7253cbf49a56341"} Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.817657 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.889299 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:22 crc kubenswrapper[4751]: I1123 04:12:22.955993 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b8f7cfdb6-q2828" Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.018312 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fb764c846-4s62f"] Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.022630 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fb764c846-4s62f" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api-log" containerID="cri-o://5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17" gracePeriod=30 Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.023409 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fb764c846-4s62f" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api" containerID="cri-o://6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b" gracePeriod=30 Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.043243 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6fb764c846-4s62f" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF" Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.416339 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.723475 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e616ddc-3fc0-451e-a375-fb3e447497a1","Type":"ContainerStarted","Data":"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd"} Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.759686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerStarted","Data":"93d8e921bc0758f7542cd499c3e8dfcc45ff90d8e29a6c552b39eec943feaf2c"} Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.765565 4751 generic.go:334] "Generic (PLEG): container finished" podID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerID="5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17" exitCode=143 Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.765635 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb764c846-4s62f" event={"ID":"3643cf0c-8c87-4a08-8f43-3c4d7883d890","Type":"ContainerDied","Data":"5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17"} Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.784428 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" event={"ID":"c0605451-7647-4783-8601-9662f5c14868","Type":"ContainerStarted","Data":"38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce"} Nov 23 04:12:23 crc kubenswrapper[4751]: 
I1123 04:12:23.784490 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:23 crc kubenswrapper[4751]: I1123 04:12:23.804206 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" podStartSLOduration=3.804164724 podStartE2EDuration="3.804164724s" podCreationTimestamp="2025-11-23 04:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:23.804108362 +0000 UTC m=+1039.997779721" watchObservedRunningTime="2025-11-23 04:12:23.804164724 +0000 UTC m=+1039.997836083" Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.527695 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.535735 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-789489d584-slcs8" Nov 23 04:12:24 crc kubenswrapper[4751]: W1123 04:12:24.725365 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fc4685_a555_4653_8702_8a3e03e6a8b3.slice/crio-ed292843b7c22937bb68f5667cd4403142c896cefd1fe73b50aa759c524f8991 WatchSource:0}: Error finding container ed292843b7c22937bb68f5667cd4403142c896cefd1fe73b50aa759c524f8991: Status 404 returned error can't find the container with id ed292843b7c22937bb68f5667cd4403142c896cefd1fe73b50aa759c524f8991 Nov 23 04:12:24 crc kubenswrapper[4751]: W1123 04:12:24.746935 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fc4685_a555_4653_8702_8a3e03e6a8b3.slice/crio-54fd9f9a5856f4cf8fcb28540ca4bdb883024191d5384f61c35210f768df5301.scope WatchSource:0}: Error finding container 54fd9f9a5856f4cf8fcb28540ca4bdb883024191d5384f61c35210f768df5301: Status 404 returned error can't find the container with id 54fd9f9a5856f4cf8fcb28540ca4bdb883024191d5384f61c35210f768df5301 Nov 23 04:12:24 crc kubenswrapper[4751]: W1123 04:12:24.762679 4751 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fc4685_a555_4653_8702_8a3e03e6a8b3.slice/crio-conmon-89e820b7e7595bf3b168317dc239629bd756a6fce4139ef0f8d816322682f33f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fc4685_a555_4653_8702_8a3e03e6a8b3.slice/crio-conmon-89e820b7e7595bf3b168317dc239629bd756a6fce4139ef0f8d816322682f33f.scope: no such file or directory Nov 23 04:12:24 crc kubenswrapper[4751]: W1123 04:12:24.762766 4751 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fc4685_a555_4653_8702_8a3e03e6a8b3.slice/crio-89e820b7e7595bf3b168317dc239629bd756a6fce4139ef0f8d816322682f33f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fc4685_a555_4653_8702_8a3e03e6a8b3.slice/crio-89e820b7e7595bf3b168317dc239629bd756a6fce4139ef0f8d816322682f33f.scope: no such file or directory Nov 23 04:12:24 crc kubenswrapper[4751]: W1123 04:12:24.779818 4751 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod542a9266_a029_4159_9512_8e10600dfb46.slice/crio-conmon-7c5504eb784fd63f833ea87510c6f8fa59ed1d7e2de7b5b9101fe4cdc91e1ea4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod542a9266_a029_4159_9512_8e10600dfb46.slice/crio-conmon-7c5504eb784fd63f833ea87510c6f8fa59ed1d7e2de7b5b9101fe4cdc91e1ea4.scope: no such file or directory Nov 23 04:12:24 crc kubenswrapper[4751]: W1123 04:12:24.779877 4751 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod542a9266_a029_4159_9512_8e10600dfb46.slice/crio-7c5504eb784fd63f833ea87510c6f8fa59ed1d7e2de7b5b9101fe4cdc91e1ea4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod542a9266_a029_4159_9512_8e10600dfb46.slice/crio-7c5504eb784fd63f833ea87510c6f8fa59ed1d7e2de7b5b9101fe4cdc91e1ea4.scope: no such file or directory Nov 23 04:12:24 crc kubenswrapper[4751]: W1123 04:12:24.779896 4751 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ead28c_46bc_4415_a35c_1d3d8de722dd.slice/crio-conmon-3854bd35c015d29f7d785f3af44bcea5fe2d4e756be8f7308bb8cd96a5e2c9e5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ead28c_46bc_4415_a35c_1d3d8de722dd.slice/crio-conmon-3854bd35c015d29f7d785f3af44bcea5fe2d4e756be8f7308bb8cd96a5e2c9e5.scope: no such file or directory Nov 23 04:12:24 crc kubenswrapper[4751]: W1123 04:12:24.779916 4751 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ead28c_46bc_4415_a35c_1d3d8de722dd.slice/crio-3854bd35c015d29f7d785f3af44bcea5fe2d4e756be8f7308bb8cd96a5e2c9e5.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ead28c_46bc_4415_a35c_1d3d8de722dd.slice/crio-3854bd35c015d29f7d785f3af44bcea5fe2d4e756be8f7308bb8cd96a5e2c9e5.scope: no such file or directory Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.839649 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e616ddc-3fc0-451e-a375-fb3e447497a1","Type":"ContainerStarted","Data":"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6"} Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.839897 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerName="cinder-api-log" containerID="cri-o://0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd" gracePeriod=30 Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.840176 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.840259 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerName="cinder-api" containerID="cri-o://74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6" gracePeriod=30 Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.868666 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerStarted","Data":"6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1"} Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.868716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerStarted","Data":"76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca"} Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.870719 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.870697406 podStartE2EDuration="3.870697406s" podCreationTimestamp="2025-11-23 04:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:24.864858121 +0000 UTC m=+1041.058529480" watchObservedRunningTime="2025-11-23 04:12:24.870697406 +0000 UTC m=+1041.064368755" Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.880575 4751 generic.go:334] "Generic (PLEG): container finished" podID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerID="9aa5442f4c07e34afc6678f5cf2c0ce807dee5f569c32d284a5bf8b80d8b2ed3" exitCode=137 Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.880844 4751 generic.go:334] "Generic (PLEG): container finished" podID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerID="9ab5f50984b9f80f71577a873a35612d085dd51dddd9f5cd6b45df679229dbdf" exitCode=137 Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.880881 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84c958854f-wnwfq" event={"ID":"9f395bca-24d1-4b23-8eb2-782713ee4d9a","Type":"ContainerDied","Data":"9aa5442f4c07e34afc6678f5cf2c0ce807dee5f569c32d284a5bf8b80d8b2ed3"} Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.880905 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84c958854f-wnwfq" event={"ID":"9f395bca-24d1-4b23-8eb2-782713ee4d9a","Type":"ContainerDied","Data":"9ab5f50984b9f80f71577a873a35612d085dd51dddd9f5cd6b45df679229dbdf"} Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.898133 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd392cc8-4cbd-4f32-82f3-9943ca7133e4","Type":"ContainerStarted","Data":"2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2"} Nov 23 04:12:24 crc kubenswrapper[4751]: I1123 04:12:24.945493 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.007229379 podStartE2EDuration="4.945470681s" podCreationTimestamp="2025-11-23 04:12:20 +0000 UTC" firstStartedPulling="2025-11-23 04:12:21.854750762 +0000 UTC m=+1038.048422121" lastFinishedPulling="2025-11-23 04:12:22.792992064 +0000 UTC m=+1038.986663423" observedRunningTime="2025-11-23 04:12:24.929709035 +0000 UTC m=+1041.123380394" watchObservedRunningTime="2025-11-23 04:12:24.945470681 +0000 UTC m=+1041.139142050" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.103668 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.248085 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-scripts\") pod \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.248152 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f395bca-24d1-4b23-8eb2-782713ee4d9a-logs\") pod \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.248282 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-config-data\") pod \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.248333 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwk2j\" (UniqueName: \"kubernetes.io/projected/9f395bca-24d1-4b23-8eb2-782713ee4d9a-kube-api-access-zwk2j\") pod \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.248417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f395bca-24d1-4b23-8eb2-782713ee4d9a-horizon-secret-key\") pod \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\" (UID: \"9f395bca-24d1-4b23-8eb2-782713ee4d9a\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.251790 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f395bca-24d1-4b23-8eb2-782713ee4d9a-logs" (OuterVolumeSpecName: "logs") pod "9f395bca-24d1-4b23-8eb2-782713ee4d9a" (UID: "9f395bca-24d1-4b23-8eb2-782713ee4d9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: E1123 04:12:25.259128 4751 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/16da52824ff0930c6c54f81bb0e0885b1df497408f1500edef7dcad8f3b974a9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/16da52824ff0930c6c54f81bb0e0885b1df497408f1500edef7dcad8f3b974a9/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-56df8fb6b7-tbgwf_cfb27df9-70a5-4ef4-bd96-0257f7c5dbab/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-56df8fb6b7-tbgwf_cfb27df9-70a5-4ef4-bd96-0257f7c5dbab/dnsmasq-dns/0.log: no such file or directory Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.276745 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f395bca-24d1-4b23-8eb2-782713ee4d9a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9f395bca-24d1-4b23-8eb2-782713ee4d9a" (UID: "9f395bca-24d1-4b23-8eb2-782713ee4d9a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.288539 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f395bca-24d1-4b23-8eb2-782713ee4d9a-kube-api-access-zwk2j" (OuterVolumeSpecName: "kube-api-access-zwk2j") pod "9f395bca-24d1-4b23-8eb2-782713ee4d9a" (UID: "9f395bca-24d1-4b23-8eb2-782713ee4d9a"). InnerVolumeSpecName "kube-api-access-zwk2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.307413 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-config-data" (OuterVolumeSpecName: "config-data") pod "9f395bca-24d1-4b23-8eb2-782713ee4d9a" (UID: "9f395bca-24d1-4b23-8eb2-782713ee4d9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.320035 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-scripts" (OuterVolumeSpecName: "scripts") pod "9f395bca-24d1-4b23-8eb2-782713ee4d9a" (UID: "9f395bca-24d1-4b23-8eb2-782713ee4d9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.353750 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.353777 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f395bca-24d1-4b23-8eb2-782713ee4d9a-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.353787 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f395bca-24d1-4b23-8eb2-782713ee4d9a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.353798 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwk2j\" (UniqueName: \"kubernetes.io/projected/9f395bca-24d1-4b23-8eb2-782713ee4d9a-kube-api-access-zwk2j\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.353808 4751 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f395bca-24d1-4b23-8eb2-782713ee4d9a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.659205 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.761832 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data-custom\") pod \"7e616ddc-3fc0-451e-a375-fb3e447497a1\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.761893 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e616ddc-3fc0-451e-a375-fb3e447497a1-logs\") pod \"7e616ddc-3fc0-451e-a375-fb3e447497a1\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.761915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data\") pod \"7e616ddc-3fc0-451e-a375-fb3e447497a1\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.761955 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-combined-ca-bundle\") pod \"7e616ddc-3fc0-451e-a375-fb3e447497a1\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.762015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97psx\" (UniqueName: \"kubernetes.io/projected/7e616ddc-3fc0-451e-a375-fb3e447497a1-kube-api-access-97psx\") pod \"7e616ddc-3fc0-451e-a375-fb3e447497a1\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.762077 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-scripts\") pod \"7e616ddc-3fc0-451e-a375-fb3e447497a1\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.762158 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e616ddc-3fc0-451e-a375-fb3e447497a1-etc-machine-id\") pod \"7e616ddc-3fc0-451e-a375-fb3e447497a1\" (UID: \"7e616ddc-3fc0-451e-a375-fb3e447497a1\") " Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.762361 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e616ddc-3fc0-451e-a375-fb3e447497a1-logs" (OuterVolumeSpecName: "logs") pod "7e616ddc-3fc0-451e-a375-fb3e447497a1" (UID: "7e616ddc-3fc0-451e-a375-fb3e447497a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.762720 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e616ddc-3fc0-451e-a375-fb3e447497a1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7e616ddc-3fc0-451e-a375-fb3e447497a1" (UID: "7e616ddc-3fc0-451e-a375-fb3e447497a1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.762880 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7e616ddc-3fc0-451e-a375-fb3e447497a1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.762897 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e616ddc-3fc0-451e-a375-fb3e447497a1-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.767820 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-scripts" (OuterVolumeSpecName: "scripts") pod "7e616ddc-3fc0-451e-a375-fb3e447497a1" (UID: "7e616ddc-3fc0-451e-a375-fb3e447497a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.767879 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e616ddc-3fc0-451e-a375-fb3e447497a1-kube-api-access-97psx" (OuterVolumeSpecName: "kube-api-access-97psx") pod "7e616ddc-3fc0-451e-a375-fb3e447497a1" (UID: "7e616ddc-3fc0-451e-a375-fb3e447497a1"). InnerVolumeSpecName "kube-api-access-97psx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.768496 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7e616ddc-3fc0-451e-a375-fb3e447497a1" (UID: "7e616ddc-3fc0-451e-a375-fb3e447497a1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.801460 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e616ddc-3fc0-451e-a375-fb3e447497a1" (UID: "7e616ddc-3fc0-451e-a375-fb3e447497a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.822536 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data" (OuterVolumeSpecName: "config-data") pod "7e616ddc-3fc0-451e-a375-fb3e447497a1" (UID: "7e616ddc-3fc0-451e-a375-fb3e447497a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.864751 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.864782 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.864793 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97psx\" (UniqueName: \"kubernetes.io/projected/7e616ddc-3fc0-451e-a375-fb3e447497a1-kube-api-access-97psx\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.864802 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.864810 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7e616ddc-3fc0-451e-a375-fb3e447497a1-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.907061 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84c958854f-wnwfq" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.907048 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84c958854f-wnwfq" event={"ID":"9f395bca-24d1-4b23-8eb2-782713ee4d9a","Type":"ContainerDied","Data":"0f3f4047ae7c7280921109033cc7d0625d880cfc49c6687aa8a7ba6cb821b9a6"} Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.907221 4751 scope.go:117] "RemoveContainer" containerID="9aa5442f4c07e34afc6678f5cf2c0ce807dee5f569c32d284a5bf8b80d8b2ed3" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.908935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd392cc8-4cbd-4f32-82f3-9943ca7133e4","Type":"ContainerStarted","Data":"2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26"} Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.914133 4751 generic.go:334] "Generic (PLEG): container finished" podID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerID="74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6" exitCode=0 Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.914245 4751 generic.go:334] "Generic (PLEG): container finished" podID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerID="0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd" exitCode=143 Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.914576 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.914967 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e616ddc-3fc0-451e-a375-fb3e447497a1","Type":"ContainerDied","Data":"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6"} Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.915010 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e616ddc-3fc0-451e-a375-fb3e447497a1","Type":"ContainerDied","Data":"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd"} Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.915021 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7e616ddc-3fc0-451e-a375-fb3e447497a1","Type":"ContainerDied","Data":"a693c307839d39207ed5bf62570fb3a1ec4f3b3db391a754f7253cbf49a56341"} Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.916908 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerStarted","Data":"b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b"} Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.946420 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84c958854f-wnwfq"] Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.962059 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84c958854f-wnwfq"] Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.972127 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:25 crc kubenswrapper[4751]: I1123 04:12:25.983286 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.003410 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:26 crc kubenswrapper[4751]: E1123 04:12:26.003813 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerName="horizon-log" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.003830 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerName="horizon-log" Nov 23 04:12:26 crc kubenswrapper[4751]: E1123 04:12:26.003844 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerName="cinder-api" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.003852 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerName="cinder-api" Nov 23 04:12:26 crc kubenswrapper[4751]: E1123 04:12:26.003865 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerName="cinder-api-log" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.003871 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerName="cinder-api-log" Nov 23 04:12:26 crc kubenswrapper[4751]: E1123 04:12:26.003891 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerName="horizon" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.003897 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerName="horizon" 
Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.004054 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerName="cinder-api" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.004068 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerName="horizon-log" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.004111 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" containerName="cinder-api-log" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.004128 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" containerName="horizon" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.005088 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.013929 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.014168 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.014279 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.017388 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.073114 4751 scope.go:117] "RemoveContainer" containerID="9ab5f50984b9f80f71577a873a35612d085dd51dddd9f5cd6b45df679229dbdf" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.099902 4751 scope.go:117] "RemoveContainer" containerID="74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.130205 4751 scope.go:117] "RemoveContainer" containerID="0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.149161 4751 scope.go:117] "RemoveContainer" containerID="74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6" Nov 23 04:12:26 crc kubenswrapper[4751]: E1123 04:12:26.149706 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6\": container with ID starting with 74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6 not found: ID does not exist" containerID="74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.149740 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6"} err="failed to get container status \"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6\": rpc error: code = NotFound desc = could not find container \"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6\": container with ID starting with 74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6 not found: ID does not exist" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.149765 4751 scope.go:117] "RemoveContainer" 
containerID="0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd" Nov 23 04:12:26 crc kubenswrapper[4751]: E1123 04:12:26.150186 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd\": container with ID starting with 0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd not found: ID does not exist" containerID="0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.150230 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd"} err="failed to get container status \"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd\": rpc error: code = NotFound desc = could not find container \"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd\": container with ID starting with 0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd not found: ID does not exist" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.150258 4751 scope.go:117] "RemoveContainer" containerID="74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.150538 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6"} err="failed to get container status \"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6\": rpc error: code = NotFound desc = could not find container \"74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6\": container with ID starting with 74d5aa2c2e688214190c6dd101705736c04c73df5963f1188effdf921fe9ccd6 not found: ID does not exist" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.150558 4751 scope.go:117] "RemoveContainer" containerID="0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.150797 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd"} err="failed to get container status \"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd\": rpc error: code = NotFound desc = could not find container \"0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd\": container with ID starting with 0bdf78de5c140c3f5ae26d415f002f9d6bc44326fda272dd01f6b679906163fd not found: ID does not exist" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.169763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.169798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.169839 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-config-data-custom\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.169853 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-logs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.169907 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.169933 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn74v\" (UniqueName: \"kubernetes.io/projected/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-kube-api-access-fn74v\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.169952 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-scripts\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.169966 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-config-data\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.170010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.272243 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.272419 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn74v\" (UniqueName: \"kubernetes.io/projected/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-kube-api-access-fn74v\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.272520 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-scripts\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " 
pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.272609 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-config-data\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.272735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.272870 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.272952 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.273062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-config-data-custom\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.273147 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-logs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.273601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-logs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.273705 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.274681 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.278452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-scripts\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.278799 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.280061 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.280914 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-config-data-custom\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.285014 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.300580 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-config-data\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.302900 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn74v\" (UniqueName: \"kubernetes.io/projected/84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f-kube-api-access-fn74v\") pod \"cinder-api-0\" (UID: \"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f\") " pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.330263 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.424912 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.656082 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e616ddc-3fc0-451e-a375-fb3e447497a1" path="/var/lib/kubelet/pods/7e616ddc-3fc0-451e-a375-fb3e447497a1/volumes" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.657266 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f395bca-24d1-4b23-8eb2-782713ee4d9a" path="/var/lib/kubelet/pods/9f395bca-24d1-4b23-8eb2-782713ee4d9a/volumes" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.768090 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.929507 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-789489d584-slcs8" Nov 23 04:12:26 crc kubenswrapper[4751]: I1123 04:12:26.943138 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f","Type":"ContainerStarted","Data":"484ab4f1440687f2e7ec13f71269bb01f604853e970c938627f9701d466fa5d5"} Nov 23 04:12:27 crc kubenswrapper[4751]: I1123 04:12:27.011990 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bc45b664d-wh6ld"] Nov 23 04:12:27 crc kubenswrapper[4751]: I1123 04:12:27.014102 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bc45b664d-wh6ld" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon" containerID="cri-o://9479c96ecd501a93127a0cba95158d6fc07b3db5c33697a330901fef10a6c446" gracePeriod=30 Nov 23 04:12:27 crc kubenswrapper[4751]: I1123 04:12:27.014827 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bc45b664d-wh6ld" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon-log" containerID="cri-o://7e524121e462318f5ee78e48f212c7932519de6fda8d0e8b7d02daab42da34b7" gracePeriod=30 Nov 23 04:12:27 crc kubenswrapper[4751]: I1123 04:12:27.478563 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fb764c846-4s62f" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:39306->10.217.0.159:9311: read: connection reset by peer" Nov 23 04:12:27 crc kubenswrapper[4751]: I1123 04:12:27.478600 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fb764c846-4s62f" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:39310->10.217.0.159:9311: read: connection reset by peer" Nov 23 04:12:27 crc kubenswrapper[4751]: W1123 04:12:27.585930 4751 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice/crio-conmon-e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice/crio-conmon-e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4.scope: no such file or directory Nov 23 04:12:27 crc kubenswrapper[4751]: W1123 04:12:27.586293 4751 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice/crio-e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice/crio-e3a624d28a2a012a6730aade604a50f8e623330822c5442cd8bea9ad6e67fcb4.scope: no such file or directory Nov 23 04:12:27 crc kubenswrapper[4751]: W1123 04:12:27.624399 4751 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e616ddc_3fc0_451e_a375_fb3e447497a1.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e616ddc_3fc0_451e_a375_fb3e447497a1.slice: no such file or directory Nov 23 04:12:27 crc kubenswrapper[4751]: E1123 04:12:27.804545 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice/crio-aef38b47615f7eb6446d09e702c3d83c81aaf1332d3005e1f772fcd88e238017\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fc4685_a555_4653_8702_8a3e03e6a8b3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ead28c_46bc_4415_a35c_1d3d8de722dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ead28c_46bc_4415_a35c_1d3d8de722dd.slice/crio-aa831b7c31029ec4c4eed89d5d60f5e5eed7fda9d19aa9765e3b11733c28b33d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfb27df9_70a5_4ef4_bd96_0257f7c5dbab.slice/crio-conmon-f51486f79a394c0443d109eff2aab26f71dfa7cbb5c62cd629ac0d247e20ced6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfb27df9_70a5_4ef4_bd96_0257f7c5dbab.slice/crio-f51486f79a394c0443d109eff2aab26f71dfa7cbb5c62cd629ac0d247e20ced6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f395bca_24d1_4b23_8eb2_782713ee4d9a.slice/crio-conmon-9ab5f50984b9f80f71577a873a35612d085dd51dddd9f5cd6b45df679229dbdf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice/crio-725a70fec0e560b22f96dd390cabd1f6a6ea2a1fe2c6aeb86913e9828c0fa649.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f395bca_24d1_4b23_8eb2_782713ee4d9a.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice/crio-be31dcf909d58ddf97e970cfed64fb9c058f931a8dd2369408d43f5da1c2e07f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfb27df9_70a5_4ef4_bd96_0257f7c5dbab.slice/crio-d7fab97b623fccad4d291ad3dfedddcb440e48caf37ae5a7ef8d01b2e33f910f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f395bca_24d1_4b23_8eb2_782713ee4d9a.slice/crio-9ab5f50984b9f80f71577a873a35612d085dd51dddd9f5cd6b45df679229dbdf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f395bca_24d1_4b23_8eb2_782713ee4d9a.slice/crio-conmon-9aa5442f4c07e34afc6678f5cf2c0ce807dee5f569c32d284a5bf8b80d8b2ed3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b369c41_886d_44cc_821b_2d415431f9ec.slice\": RecentStats: unable to find data in memory cache]" Nov 23 04:12:27 crc kubenswrapper[4751]: I1123 04:12:27.962900 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.008441 4751 generic.go:334] "Generic (PLEG): container finished" podID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerID="6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b" exitCode=0 Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.008520 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb764c846-4s62f" event={"ID":"3643cf0c-8c87-4a08-8f43-3c4d7883d890","Type":"ContainerDied","Data":"6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b"} Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.008522 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fb764c846-4s62f" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.008554 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb764c846-4s62f" event={"ID":"3643cf0c-8c87-4a08-8f43-3c4d7883d890","Type":"ContainerDied","Data":"4bd9ca9eca2a96ac80182aa12a003b11387657a03dab4ed1b61a628b2d732af6"} Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.008577 4751 scope.go:117] "RemoveContainer" containerID="6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.014376 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerStarted","Data":"28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484"} Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.014564 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.021405 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f","Type":"ContainerStarted","Data":"f38cea2e71ac97d665c834b598a3133d115b26a3877f2a0f42d2faf16f69af66"} Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.155167 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data\") pod \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.155292 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3643cf0c-8c87-4a08-8f43-3c4d7883d890-logs\") pod \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.155333 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9xrt\" (UniqueName: \"kubernetes.io/projected/3643cf0c-8c87-4a08-8f43-3c4d7883d890-kube-api-access-b9xrt\") pod \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.155406 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-combined-ca-bundle\") pod \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.155451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data-custom\") pod \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\" (UID: \"3643cf0c-8c87-4a08-8f43-3c4d7883d890\") " Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.175153 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3643cf0c-8c87-4a08-8f43-3c4d7883d890-logs" (OuterVolumeSpecName: "logs") pod "3643cf0c-8c87-4a08-8f43-3c4d7883d890" (UID: "3643cf0c-8c87-4a08-8f43-3c4d7883d890"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.178385 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3643cf0c-8c87-4a08-8f43-3c4d7883d890-kube-api-access-b9xrt" (OuterVolumeSpecName: "kube-api-access-b9xrt") pod "3643cf0c-8c87-4a08-8f43-3c4d7883d890" (UID: "3643cf0c-8c87-4a08-8f43-3c4d7883d890"). InnerVolumeSpecName "kube-api-access-b9xrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.193131 4751 scope.go:117] "RemoveContainer" containerID="5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.193470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3643cf0c-8c87-4a08-8f43-3c4d7883d890" (UID: "3643cf0c-8c87-4a08-8f43-3c4d7883d890"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.219479 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3643cf0c-8c87-4a08-8f43-3c4d7883d890" (UID: "3643cf0c-8c87-4a08-8f43-3c4d7883d890"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.232517 4751 scope.go:117] "RemoveContainer" containerID="6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b" Nov 23 04:12:28 crc kubenswrapper[4751]: E1123 04:12:28.232832 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b\": container with ID starting with 6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b not found: ID does not exist" containerID="6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.232861 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b"} err="failed to get container status \"6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b\": rpc error: code = NotFound desc = could not find container \"6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b\": container with ID starting with 6b8d7ca53aae5b22fd688578cade718be51eed31829fc3190d811fba9daad68b not found: ID does not exist" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.232884 4751 scope.go:117] "RemoveContainer" containerID="5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17" Nov 23 04:12:28 crc kubenswrapper[4751]: E1123 04:12:28.233063 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17\": container with ID starting with 5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17 not found: ID does not exist" containerID="5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.233083 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17"} err="failed to get container status \"5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17\": rpc error: code = NotFound desc = could not find container \"5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17\": container with ID starting with 5bdb2cb1a2cf4d608af23ffaa36a493724d0e8f3aee655e57209cd4ebb3afd17 not found: ID does not exist" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.249623 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data" (OuterVolumeSpecName: "config-data") pod "3643cf0c-8c87-4a08-8f43-3c4d7883d890" (UID: "3643cf0c-8c87-4a08-8f43-3c4d7883d890"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.257025 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.257055 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3643cf0c-8c87-4a08-8f43-3c4d7883d890-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.257065 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9xrt\" (UniqueName: \"kubernetes.io/projected/3643cf0c-8c87-4a08-8f43-3c4d7883d890-kube-api-access-b9xrt\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.257074 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.257083 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3643cf0c-8c87-4a08-8f43-3c4d7883d890-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.335877 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.446892067 podStartE2EDuration="7.33586266s" podCreationTimestamp="2025-11-23 04:12:21 +0000 UTC" firstStartedPulling="2025-11-23 04:12:22.921088109 +0000 UTC m=+1039.114759468" lastFinishedPulling="2025-11-23 04:12:26.810058692 +0000 UTC m=+1043.003730061" observedRunningTime="2025-11-23 04:12:28.065505176 +0000 UTC m=+1044.259176535" watchObservedRunningTime="2025-11-23 04:12:28.33586266 +0000 UTC m=+1044.529534019" Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.338145 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fb764c846-4s62f"] Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.346455 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6fb764c846-4s62f"] Nov 23 04:12:28 crc kubenswrapper[4751]: I1123 04:12:28.654685 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" path="/var/lib/kubelet/pods/3643cf0c-8c87-4a08-8f43-3c4d7883d890/volumes" Nov 23 04:12:29 crc kubenswrapper[4751]: I1123 04:12:29.034603 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f","Type":"ContainerStarted","Data":"f429665585fa462a02e9e74b64fc9829cee537e2c5e3e9d6cf173ebe68b86594"} Nov 23 04:12:29 crc kubenswrapper[4751]: I1123 04:12:29.034655 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 04:12:29 crc kubenswrapper[4751]: I1123 04:12:29.062138 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.062120361 podStartE2EDuration="4.062120361s" podCreationTimestamp="2025-11-23 04:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:29.059083471 +0000 UTC m=+1045.252754830" watchObservedRunningTime="2025-11-23 04:12:29.062120361 +0000 UTC m=+1045.255791720" Nov 23 04:12:31 crc kubenswrapper[4751]: I1123 04:12:31.060053 4751 generic.go:334] "Generic (PLEG): container finished" podID="3ce82842-e359-4824-abb2-6c652caf36ca" containerID="9479c96ecd501a93127a0cba95158d6fc07b3db5c33697a330901fef10a6c446" exitCode=0 Nov 23 04:12:31 crc kubenswrapper[4751]: I1123 04:12:31.060135 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc45b664d-wh6ld" event={"ID":"3ce82842-e359-4824-abb2-6c652caf36ca","Type":"ContainerDied","Data":"9479c96ecd501a93127a0cba95158d6fc07b3db5c33697a330901fef10a6c446"} Nov 23 04:12:31 crc kubenswrapper[4751]: I1123 04:12:31.451676 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:12:31 crc kubenswrapper[4751]: I1123 04:12:31.531678 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 04:12:31 crc kubenswrapper[4751]: I1123 04:12:31.572793 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-v6fg9"] Nov 23 04:12:31 crc kubenswrapper[4751]: I1123 04:12:31.573561 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" podUID="542a9266-a029-4159-9512-8e10600dfb46" containerName="dnsmasq-dns" containerID="cri-o://539108f2455714d29035d73da756a01b46a0859d989aea077908301113d08039" gracePeriod=10 Nov 23 04:12:31 crc kubenswrapper[4751]: I1123 04:12:31.610427 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:31 crc kubenswrapper[4751]: I1123 04:12:31.786463 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" podUID="542a9266-a029-4159-9512-8e10600dfb46" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.016535 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bc45b664d-wh6ld" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.070317 4751 generic.go:334] "Generic (PLEG): container finished" podID="542a9266-a029-4159-9512-8e10600dfb46" containerID="539108f2455714d29035d73da756a01b46a0859d989aea077908301113d08039" exitCode=0 Nov 23 04:12:32 crc kubenswrapper[4751]: 
I1123 04:12:32.070411 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" event={"ID":"542a9266-a029-4159-9512-8e10600dfb46","Type":"ContainerDied","Data":"539108f2455714d29035d73da756a01b46a0859d989aea077908301113d08039"} Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.070464 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" event={"ID":"542a9266-a029-4159-9512-8e10600dfb46","Type":"ContainerDied","Data":"9059e324844cb1ecb567173a97ded2070f0c6015313ad2d97b2a1375c76a54fb"} Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.070482 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9059e324844cb1ecb567173a97ded2070f0c6015313ad2d97b2a1375c76a54fb" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.070526 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerName="cinder-scheduler" containerID="cri-o://2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2" gracePeriod=30 Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.070646 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerName="probe" containerID="cri-o://2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26" gracePeriod=30 Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.085521 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.145107 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-nb\") pod \"542a9266-a029-4159-9512-8e10600dfb46\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.145262 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-config\") pod \"542a9266-a029-4159-9512-8e10600dfb46\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.145448 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzxhl\" (UniqueName: \"kubernetes.io/projected/542a9266-a029-4159-9512-8e10600dfb46-kube-api-access-jzxhl\") pod \"542a9266-a029-4159-9512-8e10600dfb46\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.145685 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-svc\") pod \"542a9266-a029-4159-9512-8e10600dfb46\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.146285 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-sb\") pod \"542a9266-a029-4159-9512-8e10600dfb46\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.146338 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-swift-storage-0\") pod \"542a9266-a029-4159-9512-8e10600dfb46\" (UID: \"542a9266-a029-4159-9512-8e10600dfb46\") " Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.177639 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/542a9266-a029-4159-9512-8e10600dfb46-kube-api-access-jzxhl" (OuterVolumeSpecName: "kube-api-access-jzxhl") pod "542a9266-a029-4159-9512-8e10600dfb46" (UID: "542a9266-a029-4159-9512-8e10600dfb46"). InnerVolumeSpecName "kube-api-access-jzxhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.204964 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "542a9266-a029-4159-9512-8e10600dfb46" (UID: "542a9266-a029-4159-9512-8e10600dfb46"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.225993 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "542a9266-a029-4159-9512-8e10600dfb46" (UID: "542a9266-a029-4159-9512-8e10600dfb46"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.226986 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "542a9266-a029-4159-9512-8e10600dfb46" (UID: "542a9266-a029-4159-9512-8e10600dfb46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.237978 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "542a9266-a029-4159-9512-8e10600dfb46" (UID: "542a9266-a029-4159-9512-8e10600dfb46"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.240813 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-config" (OuterVolumeSpecName: "config") pod "542a9266-a029-4159-9512-8e10600dfb46" (UID: "542a9266-a029-4159-9512-8e10600dfb46"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.248549 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzxhl\" (UniqueName: \"kubernetes.io/projected/542a9266-a029-4159-9512-8e10600dfb46-kube-api-access-jzxhl\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.248573 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.248584 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.248594 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.248602 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:32 crc kubenswrapper[4751]: I1123 04:12:32.248610 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542a9266-a029-4159-9512-8e10600dfb46-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:33 crc kubenswrapper[4751]: I1123 04:12:33.081394 4751 generic.go:334] "Generic (PLEG): container finished" podID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerID="2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26" exitCode=0 Nov 23 04:12:33 crc kubenswrapper[4751]: I1123 04:12:33.085290 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-v6fg9" Nov 23 04:12:33 crc kubenswrapper[4751]: I1123 04:12:33.081499 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd392cc8-4cbd-4f32-82f3-9943ca7133e4","Type":"ContainerDied","Data":"2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26"} Nov 23 04:12:33 crc kubenswrapper[4751]: I1123 04:12:33.114395 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-v6fg9"] Nov 23 04:12:33 crc kubenswrapper[4751]: I1123 04:12:33.120799 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-v6fg9"] Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.530298 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.667877 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="542a9266-a029-4159-9512-8e10600dfb46" path="/var/lib/kubelet/pods/542a9266-a029-4159-9512-8e10600dfb46/volumes" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.782198 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-66b57bb577-p2b4n" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.906653 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 04:12:34 crc kubenswrapper[4751]: E1123 04:12:34.906981 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api-log" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.906999 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api-log" Nov 23 04:12:34 crc kubenswrapper[4751]: E1123 04:12:34.907010 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="542a9266-a029-4159-9512-8e10600dfb46" containerName="dnsmasq-dns" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.907016 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="542a9266-a029-4159-9512-8e10600dfb46" containerName="dnsmasq-dns" Nov 23 04:12:34 crc kubenswrapper[4751]: E1123 04:12:34.907029 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.907035 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api" Nov 23 04:12:34 crc kubenswrapper[4751]: E1123 04:12:34.907060 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="542a9266-a029-4159-9512-8e10600dfb46" containerName="init" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.907065 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="542a9266-a029-4159-9512-8e10600dfb46" containerName="init" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.907233 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="542a9266-a029-4159-9512-8e10600dfb46" containerName="dnsmasq-dns" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.907257 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.907271 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3643cf0c-8c87-4a08-8f43-3c4d7883d890" containerName="barbican-api-log" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.911924 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.914064 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bvzx4" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.914629 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.918005 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.918924 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.996621 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfwbp\" (UniqueName: \"kubernetes.io/projected/d8f1f72f-cb69-43a3-8f06-1f348a731330-kube-api-access-cfwbp\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.996704 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f1f72f-cb69-43a3-8f06-1f348a731330-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.996780 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d8f1f72f-cb69-43a3-8f06-1f348a731330-openstack-config\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:34 crc kubenswrapper[4751]: I1123 04:12:34.996850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d8f1f72f-cb69-43a3-8f06-1f348a731330-openstack-config-secret\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.098975 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfwbp\" (UniqueName: \"kubernetes.io/projected/d8f1f72f-cb69-43a3-8f06-1f348a731330-kube-api-access-cfwbp\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.099067 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f1f72f-cb69-43a3-8f06-1f348a731330-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.099176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d8f1f72f-cb69-43a3-8f06-1f348a731330-openstack-config\") pod \"openstackclient\" (UID: 
\"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.099273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d8f1f72f-cb69-43a3-8f06-1f348a731330-openstack-config-secret\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.101025 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d8f1f72f-cb69-43a3-8f06-1f348a731330-openstack-config\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.107381 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f1f72f-cb69-43a3-8f06-1f348a731330-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.117767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfwbp\" (UniqueName: \"kubernetes.io/projected/d8f1f72f-cb69-43a3-8f06-1f348a731330-kube-api-access-cfwbp\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.119746 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d8f1f72f-cb69-43a3-8f06-1f348a731330-openstack-config-secret\") pod \"openstackclient\" (UID: \"d8f1f72f-cb69-43a3-8f06-1f348a731330\") " pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.233178 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.598392 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.709315 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-scripts\") pod \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.710108 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnsr4\" (UniqueName: \"kubernetes.io/projected/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-kube-api-access-lnsr4\") pod \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.710205 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data\") pod \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.710263 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data-custom\") pod \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.710376 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-combined-ca-bundle\") pod \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.710483 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-etc-machine-id\") pod \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\" (UID: \"bd392cc8-4cbd-4f32-82f3-9943ca7133e4\") " Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.711487 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bd392cc8-4cbd-4f32-82f3-9943ca7133e4" (UID: "bd392cc8-4cbd-4f32-82f3-9943ca7133e4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.714833 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bd392cc8-4cbd-4f32-82f3-9943ca7133e4" (UID: "bd392cc8-4cbd-4f32-82f3-9943ca7133e4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.724393 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-scripts" (OuterVolumeSpecName: "scripts") pod "bd392cc8-4cbd-4f32-82f3-9943ca7133e4" (UID: "bd392cc8-4cbd-4f32-82f3-9943ca7133e4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.740825 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-kube-api-access-lnsr4" (OuterVolumeSpecName: "kube-api-access-lnsr4") pod "bd392cc8-4cbd-4f32-82f3-9943ca7133e4" (UID: "bd392cc8-4cbd-4f32-82f3-9943ca7133e4"). InnerVolumeSpecName "kube-api-access-lnsr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.758860 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.786772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd392cc8-4cbd-4f32-82f3-9943ca7133e4" (UID: "bd392cc8-4cbd-4f32-82f3-9943ca7133e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.797916 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.811482 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data" (OuterVolumeSpecName: "config-data") pod "bd392cc8-4cbd-4f32-82f3-9943ca7133e4" (UID: "bd392cc8-4cbd-4f32-82f3-9943ca7133e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.813780 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnsr4\" (UniqueName: \"kubernetes.io/projected/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-kube-api-access-lnsr4\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.813808 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.813818 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.813827 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.813835 4751 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.813842 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd392cc8-4cbd-4f32-82f3-9943ca7133e4-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:35 crc kubenswrapper[4751]: I1123 04:12:35.934097 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d9c6b99fd-4x95v" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 
04:12:36.118276 4751 generic.go:334] "Generic (PLEG): container finished" podID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerID="2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2" exitCode=0 Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.118323 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd392cc8-4cbd-4f32-82f3-9943ca7133e4","Type":"ContainerDied","Data":"2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2"} Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.118472 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bd392cc8-4cbd-4f32-82f3-9943ca7133e4","Type":"ContainerDied","Data":"5c4d3a945fe23ddd046485bc7929343b530f755cd309101e730a94991e93ad32"} Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.118492 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.118499 4751 scope.go:117] "RemoveContainer" containerID="2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.121176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d8f1f72f-cb69-43a3-8f06-1f348a731330","Type":"ContainerStarted","Data":"4fdea9aced1c23efa4060dd7a2a37cdb68f612cec8c1353d8a57fca7a3f3b0fa"} Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.175375 4751 scope.go:117] "RemoveContainer" containerID="2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.191393 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.203437 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.206956 4751 scope.go:117] "RemoveContainer" containerID="2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26" Nov 23 04:12:36 crc kubenswrapper[4751]: E1123 04:12:36.208147 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26\": container with ID starting with 2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26 not found: ID does not exist" containerID="2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.208194 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26"} err="failed to get container status \"2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26\": rpc error: code = NotFound desc = could not find container \"2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26\": container with ID starting with 2b04fd442471ca553391a7b16557dcb3263cf07de4153d3af184225a9358bc26 not found: ID does not exist" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.208221 4751 scope.go:117] "RemoveContainer" containerID="2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2" Nov 23 04:12:36 crc kubenswrapper[4751]: E1123 04:12:36.211775 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2\": container with ID starting with 2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2 not found: ID does not exist" containerID="2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.211816 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2"} err="failed to get container status \"2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2\": rpc error: code = NotFound desc = could not find container \"2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2\": container with ID starting with 2c0d824634f442fac25b8948309e7d0343450297f6a48dafefe4f64c59d073d2 not found: ID does not exist" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.214720 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:36 crc kubenswrapper[4751]: E1123 04:12:36.215189 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerName="cinder-scheduler" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.215212 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerName="cinder-scheduler" Nov 23 04:12:36 crc kubenswrapper[4751]: E1123 04:12:36.215236 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerName="probe" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.215244 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerName="probe" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.215538 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerName="probe" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.215578 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" containerName="cinder-scheduler" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.218657 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.223908 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.230585 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.328287 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-scripts\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.328681 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.328720 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-config-data\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.328740 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc64v\" (UniqueName: \"kubernetes.io/projected/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-kube-api-access-tc64v\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.328924 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.329100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.431442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.431528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.431574 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-scripts\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.431614 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.431649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-config-data\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.431674 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc64v\" (UniqueName: \"kubernetes.io/projected/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-kube-api-access-tc64v\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.431823 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.437796 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-config-data\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.437827 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.449890 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.454787 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-scripts\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.455189 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc64v\" (UniqueName: \"kubernetes.io/projected/ba61bff1-41f9-4e95-bde0-0da7b4000a1c-kube-api-access-tc64v\") pod \"cinder-scheduler-0\" (UID: \"ba61bff1-41f9-4e95-bde0-0da7b4000a1c\") " pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc 
kubenswrapper[4751]: I1123 04:12:36.551888 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 04:12:36 crc kubenswrapper[4751]: I1123 04:12:36.670894 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd392cc8-4cbd-4f32-82f3-9943ca7133e4" path="/var/lib/kubelet/pods/bd392cc8-4cbd-4f32-82f3-9943ca7133e4/volumes" Nov 23 04:12:37 crc kubenswrapper[4751]: I1123 04:12:37.034796 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 04:12:37 crc kubenswrapper[4751]: W1123 04:12:37.042514 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba61bff1_41f9_4e95_bde0_0da7b4000a1c.slice/crio-2fc3b0ec56a6bd684fa75aa7afeb5480f560a86bc62d1779d2e5ee59394cfb1d WatchSource:0}: Error finding container 2fc3b0ec56a6bd684fa75aa7afeb5480f560a86bc62d1779d2e5ee59394cfb1d: Status 404 returned error can't find the container with id 2fc3b0ec56a6bd684fa75aa7afeb5480f560a86bc62d1779d2e5ee59394cfb1d Nov 23 04:12:37 crc kubenswrapper[4751]: I1123 04:12:37.066770 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64b84b8669-6xvhn" Nov 23 04:12:37 crc kubenswrapper[4751]: I1123 04:12:37.142468 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-78fd8b465b-jwbnc"] Nov 23 04:12:37 crc kubenswrapper[4751]: I1123 04:12:37.142783 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-78fd8b465b-jwbnc" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerName="neutron-api" containerID="cri-o://47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8" gracePeriod=30 Nov 23 04:12:37 crc kubenswrapper[4751]: I1123 04:12:37.143250 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-78fd8b465b-jwbnc" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerName="neutron-httpd" containerID="cri-o://ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd" gracePeriod=30 Nov 23 04:12:37 crc kubenswrapper[4751]: I1123 04:12:37.153338 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ba61bff1-41f9-4e95-bde0-0da7b4000a1c","Type":"ContainerStarted","Data":"2fc3b0ec56a6bd684fa75aa7afeb5480f560a86bc62d1779d2e5ee59394cfb1d"} Nov 23 04:12:38 crc kubenswrapper[4751]: I1123 04:12:38.175708 4751 generic.go:334] "Generic (PLEG): container finished" podID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerID="ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd" exitCode=0 Nov 23 04:12:38 crc kubenswrapper[4751]: I1123 04:12:38.175811 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78fd8b465b-jwbnc" event={"ID":"4fae0d08-08a7-49ff-aa50-5979741223b9","Type":"ContainerDied","Data":"ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd"} Nov 23 04:12:38 crc kubenswrapper[4751]: I1123 04:12:38.186983 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ba61bff1-41f9-4e95-bde0-0da7b4000a1c","Type":"ContainerStarted","Data":"9389eab1f0a2f6832d16cf1ca4fbc78b30fce2cc0d7da57e2f395f62ffc9f1d2"} Nov 23 04:12:38 crc kubenswrapper[4751]: I1123 04:12:38.600586 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.198579 
4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ba61bff1-41f9-4e95-bde0-0da7b4000a1c","Type":"ContainerStarted","Data":"0d9a9c07e26227405834e7be43a118b4c2884355a6cd5b01c08ae454ca4d9634"} Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.247759 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.247741568 podStartE2EDuration="3.247741568s" podCreationTimestamp="2025-11-23 04:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:39.222974764 +0000 UTC m=+1055.416646123" watchObservedRunningTime="2025-11-23 04:12:39.247741568 +0000 UTC m=+1055.441412927" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.251458 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-755d45c5-9j9lj"] Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.253195 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.256201 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.256321 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.257014 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.271479 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-755d45c5-9j9lj"] Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.402836 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-combined-ca-bundle\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.402888 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x89b5\" (UniqueName: \"kubernetes.io/projected/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-kube-api-access-x89b5\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.402947 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-internal-tls-certs\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.402967 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-log-httpd\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.403054 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-public-tls-certs\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.403082 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-etc-swift\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.403107 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-run-httpd\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.403129 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-config-data\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.504782 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-combined-ca-bundle\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.504829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x89b5\" (UniqueName: \"kubernetes.io/projected/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-kube-api-access-x89b5\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.504881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-internal-tls-certs\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.504899 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-log-httpd\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.504972 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-public-tls-certs\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.504992 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-etc-swift\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.505017 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-config-data\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.505030 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-run-httpd\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.505586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-run-httpd\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.506371 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-log-httpd\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.514061 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-internal-tls-certs\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.514649 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-etc-swift\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.519664 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-combined-ca-bundle\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.519894 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-public-tls-certs\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.520674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-config-data\") pod \"swift-proxy-755d45c5-9j9lj\" 
(UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.522828 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x89b5\" (UniqueName: \"kubernetes.io/projected/69aeb1a6-d144-470d-8b47-f4e0126bd9fb-kube-api-access-x89b5\") pod \"swift-proxy-755d45c5-9j9lj\" (UID: \"69aeb1a6-d144-470d-8b47-f4e0126bd9fb\") " pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.593990 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.955390 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.956297 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="ceilometer-central-agent" containerID="cri-o://76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca" gracePeriod=30 Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.956757 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="proxy-httpd" containerID="cri-o://28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484" gracePeriod=30 Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.956822 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="sg-core" containerID="cri-o://b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b" gracePeriod=30 Nov 23 04:12:39 crc kubenswrapper[4751]: I1123 04:12:39.956867 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="ceilometer-notification-agent" containerID="cri-o://6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1" gracePeriod=30 Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.022647 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-gr4mz"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.023816 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gr4mz" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.029494 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gr4mz"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.065188 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.165:3000/\": read tcp 10.217.0.2:50900->10.217.0.165:3000: read: connection reset by peer" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.120957 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d838efd-6302-4350-a91d-a04a9c37f699-operator-scripts\") pod \"nova-api-db-create-gr4mz\" (UID: \"5d838efd-6302-4350-a91d-a04a9c37f699\") " pod="openstack/nova-api-db-create-gr4mz" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.121026 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpxhp\" (UniqueName: \"kubernetes.io/projected/5d838efd-6302-4350-a91d-a04a9c37f699-kube-api-access-bpxhp\") pod \"nova-api-db-create-gr4mz\" (UID: \"5d838efd-6302-4350-a91d-a04a9c37f699\") " pod="openstack/nova-api-db-create-gr4mz" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.136069 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.198228 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-755d45c5-9j9lj"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.213482 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-s24br"] Nov 23 04:12:40 crc kubenswrapper[4751]: E1123 04:12:40.213837 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerName="neutron-httpd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.213848 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerName="neutron-httpd" Nov 23 04:12:40 crc kubenswrapper[4751]: E1123 04:12:40.213877 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerName="neutron-api" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.213885 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerName="neutron-api" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.214052 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerName="neutron-httpd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.214064 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerName="neutron-api" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.214621 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.222876 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-httpd-config\") pod \"4fae0d08-08a7-49ff-aa50-5979741223b9\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.223055 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-combined-ca-bundle\") pod \"4fae0d08-08a7-49ff-aa50-5979741223b9\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.223125 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-ovndb-tls-certs\") pod \"4fae0d08-08a7-49ff-aa50-5979741223b9\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.223142 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-config\") pod \"4fae0d08-08a7-49ff-aa50-5979741223b9\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.223181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnl8h\" (UniqueName: \"kubernetes.io/projected/4fae0d08-08a7-49ff-aa50-5979741223b9-kube-api-access-dnl8h\") pod \"4fae0d08-08a7-49ff-aa50-5979741223b9\" (UID: \"4fae0d08-08a7-49ff-aa50-5979741223b9\") " Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.224851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d838efd-6302-4350-a91d-a04a9c37f699-operator-scripts\") pod \"nova-api-db-create-gr4mz\" (UID: \"5d838efd-6302-4350-a91d-a04a9c37f699\") " pod="openstack/nova-api-db-create-gr4mz" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.225193 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpxhp\" (UniqueName: \"kubernetes.io/projected/5d838efd-6302-4350-a91d-a04a9c37f699-kube-api-access-bpxhp\") pod \"nova-api-db-create-gr4mz\" (UID: \"5d838efd-6302-4350-a91d-a04a9c37f699\") " pod="openstack/nova-api-db-create-gr4mz" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.227011 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-s24br"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.227104 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d838efd-6302-4350-a91d-a04a9c37f699-operator-scripts\") pod \"nova-api-db-create-gr4mz\" (UID: \"5d838efd-6302-4350-a91d-a04a9c37f699\") " pod="openstack/nova-api-db-create-gr4mz" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.231574 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4fae0d08-08a7-49ff-aa50-5979741223b9" (UID: "4fae0d08-08a7-49ff-aa50-5979741223b9"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.239850 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb989e90-55d2-4317-8466-2c724c57428e" containerID="28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484" exitCode=0 Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.239884 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb989e90-55d2-4317-8466-2c724c57428e" containerID="b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b" exitCode=2 Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.239948 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerDied","Data":"28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484"} Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.239975 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerDied","Data":"b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b"} Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.239975 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fae0d08-08a7-49ff-aa50-5979741223b9-kube-api-access-dnl8h" (OuterVolumeSpecName: "kube-api-access-dnl8h") pod "4fae0d08-08a7-49ff-aa50-5979741223b9" (UID: "4fae0d08-08a7-49ff-aa50-5979741223b9"). InnerVolumeSpecName "kube-api-access-dnl8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.244928 4751 generic.go:334] "Generic (PLEG): container finished" podID="4fae0d08-08a7-49ff-aa50-5979741223b9" containerID="47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8" exitCode=0 Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.245545 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78fd8b465b-jwbnc" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.246264 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78fd8b465b-jwbnc" event={"ID":"4fae0d08-08a7-49ff-aa50-5979741223b9","Type":"ContainerDied","Data":"47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8"} Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.246294 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78fd8b465b-jwbnc" event={"ID":"4fae0d08-08a7-49ff-aa50-5979741223b9","Type":"ContainerDied","Data":"a3f38d203fbe99e0c069cea59a8ce292fb5c592e04b58352457ec2baa1291805"} Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.246312 4751 scope.go:117] "RemoveContainer" containerID="ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.254201 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpxhp\" (UniqueName: \"kubernetes.io/projected/5d838efd-6302-4350-a91d-a04a9c37f699-kube-api-access-bpxhp\") pod \"nova-api-db-create-gr4mz\" (UID: \"5d838efd-6302-4350-a91d-a04a9c37f699\") " pod="openstack/nova-api-db-create-gr4mz" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.264100 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7892-account-create-5nvvr"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.268117 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.273100 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.294653 4751 scope.go:117] "RemoveContainer" containerID="47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.295207 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-config" (OuterVolumeSpecName: "config") pod "4fae0d08-08a7-49ff-aa50-5979741223b9" (UID: "4fae0d08-08a7-49ff-aa50-5979741223b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.296425 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7892-account-create-5nvvr"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.305782 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fae0d08-08a7-49ff-aa50-5979741223b9" (UID: "4fae0d08-08a7-49ff-aa50-5979741223b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.330338 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc6gc\" (UniqueName: \"kubernetes.io/projected/c9e20f20-d357-4381-b150-bd0ca23cee86-kube-api-access-pc6gc\") pod \"nova-cell0-db-create-s24br\" (UID: \"c9e20f20-d357-4381-b150-bd0ca23cee86\") " pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.330430 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e20f20-d357-4381-b150-bd0ca23cee86-operator-scripts\") pod \"nova-cell0-db-create-s24br\" (UID: \"c9e20f20-d357-4381-b150-bd0ca23cee86\") " pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.330675 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.330688 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.344964 4751 scope.go:117] "RemoveContainer" containerID="ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd" Nov 23 04:12:40 crc kubenswrapper[4751]: E1123 04:12:40.345704 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd\": container with ID starting with ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd not found: ID does not exist" containerID="ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.345768 4751 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd"} err="failed to get container status \"ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd\": rpc error: code = NotFound desc = could not find container \"ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd\": container with ID starting with ca971b6080f4ec6cd9d6fa9c11a41f6a29504b1dc2a6069e392f34fe7d27bdbd not found: ID does not exist" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.345856 4751 scope.go:117] "RemoveContainer" containerID="47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.330697 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnl8h\" (UniqueName: \"kubernetes.io/projected/4fae0d08-08a7-49ff-aa50-5979741223b9-kube-api-access-dnl8h\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.348086 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:40 crc kubenswrapper[4751]: E1123 04:12:40.349001 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8\": container with ID starting with 47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8 not found: ID does not exist" containerID="47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.349024 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8"} err="failed to get container status \"47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8\": rpc error: code = NotFound desc = could not find container \"47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8\": container with ID starting with 47cb28cbc514a18b8fe7a647dac03ae4eee3ebd9e19e76d3a1b2d5830577f4e8 not found: ID does not exist" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.355494 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-grvjd"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.357858 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.366912 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4fae0d08-08a7-49ff-aa50-5979741223b9" (UID: "4fae0d08-08a7-49ff-aa50-5979741223b9"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.386727 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-grvjd"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.416874 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1594-account-create-98rkq"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.418093 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1594-account-create-98rkq" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.421856 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.431702 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gr4mz" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.432364 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1594-account-create-98rkq"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.449846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc6gc\" (UniqueName: \"kubernetes.io/projected/c9e20f20-d357-4381-b150-bd0ca23cee86-kube-api-access-pc6gc\") pod \"nova-cell0-db-create-s24br\" (UID: \"c9e20f20-d357-4381-b150-bd0ca23cee86\") " pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.449891 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e20f20-d357-4381-b150-bd0ca23cee86-operator-scripts\") pod \"nova-cell0-db-create-s24br\" (UID: \"c9e20f20-d357-4381-b150-bd0ca23cee86\") " pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.449920 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7kg\" (UniqueName: \"kubernetes.io/projected/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-kube-api-access-pt7kg\") pod \"nova-api-7892-account-create-5nvvr\" (UID: \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\") " pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.449965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-operator-scripts\") pod \"nova-api-7892-account-create-5nvvr\" (UID: \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\") " pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.450002 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrf6\" (UniqueName: \"kubernetes.io/projected/dcd9da06-fe98-41d1-8e37-8207665dad25-kube-api-access-vcrf6\") pod \"nova-cell1-db-create-grvjd\" (UID: \"dcd9da06-fe98-41d1-8e37-8207665dad25\") " pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.450032 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcd9da06-fe98-41d1-8e37-8207665dad25-operator-scripts\") pod \"nova-cell1-db-create-grvjd\" (UID: \"dcd9da06-fe98-41d1-8e37-8207665dad25\") " pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.450126 4751 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fae0d08-08a7-49ff-aa50-5979741223b9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.456178 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c9e20f20-d357-4381-b150-bd0ca23cee86-operator-scripts\") pod \"nova-cell0-db-create-s24br\" (UID: \"c9e20f20-d357-4381-b150-bd0ca23cee86\") " pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.468157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc6gc\" (UniqueName: \"kubernetes.io/projected/c9e20f20-d357-4381-b150-bd0ca23cee86-kube-api-access-pc6gc\") pod \"nova-cell0-db-create-s24br\" (UID: \"c9e20f20-d357-4381-b150-bd0ca23cee86\") " pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.545066 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.551246 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcd9da06-fe98-41d1-8e37-8207665dad25-operator-scripts\") pod \"nova-cell1-db-create-grvjd\" (UID: \"dcd9da06-fe98-41d1-8e37-8207665dad25\") " pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.551299 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b32816c-6aea-448d-bc18-38a3206ae2d3-operator-scripts\") pod \"nova-cell0-1594-account-create-98rkq\" (UID: \"1b32816c-6aea-448d-bc18-38a3206ae2d3\") " pod="openstack/nova-cell0-1594-account-create-98rkq" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.551424 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt7kg\" (UniqueName: \"kubernetes.io/projected/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-kube-api-access-pt7kg\") pod \"nova-api-7892-account-create-5nvvr\" (UID: \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\") " pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.551451 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwbg\" (UniqueName: \"kubernetes.io/projected/1b32816c-6aea-448d-bc18-38a3206ae2d3-kube-api-access-cpwbg\") pod \"nova-cell0-1594-account-create-98rkq\" (UID: \"1b32816c-6aea-448d-bc18-38a3206ae2d3\") " pod="openstack/nova-cell0-1594-account-create-98rkq" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.551484 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-operator-scripts\") pod \"nova-api-7892-account-create-5nvvr\" (UID: \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\") " pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.551517 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrf6\" (UniqueName: \"kubernetes.io/projected/dcd9da06-fe98-41d1-8e37-8207665dad25-kube-api-access-vcrf6\") pod \"nova-cell1-db-create-grvjd\" (UID: \"dcd9da06-fe98-41d1-8e37-8207665dad25\") " pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.552320 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-operator-scripts\") pod \"nova-api-7892-account-create-5nvvr\" 
(UID: \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\") " pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.553115 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcd9da06-fe98-41d1-8e37-8207665dad25-operator-scripts\") pod \"nova-cell1-db-create-grvjd\" (UID: \"dcd9da06-fe98-41d1-8e37-8207665dad25\") " pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.568829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrf6\" (UniqueName: \"kubernetes.io/projected/dcd9da06-fe98-41d1-8e37-8207665dad25-kube-api-access-vcrf6\") pod \"nova-cell1-db-create-grvjd\" (UID: \"dcd9da06-fe98-41d1-8e37-8207665dad25\") " pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.573781 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt7kg\" (UniqueName: \"kubernetes.io/projected/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-kube-api-access-pt7kg\") pod \"nova-api-7892-account-create-5nvvr\" (UID: \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\") " pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.595713 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.637849 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-49c1-account-create-7k8mk"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.639179 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.645587 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.656227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b32816c-6aea-448d-bc18-38a3206ae2d3-operator-scripts\") pod \"nova-cell0-1594-account-create-98rkq\" (UID: \"1b32816c-6aea-448d-bc18-38a3206ae2d3\") " pod="openstack/nova-cell0-1594-account-create-98rkq" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.656368 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwbg\" (UniqueName: \"kubernetes.io/projected/1b32816c-6aea-448d-bc18-38a3206ae2d3-kube-api-access-cpwbg\") pod \"nova-cell0-1594-account-create-98rkq\" (UID: \"1b32816c-6aea-448d-bc18-38a3206ae2d3\") " pod="openstack/nova-cell0-1594-account-create-98rkq" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.657217 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b32816c-6aea-448d-bc18-38a3206ae2d3-operator-scripts\") pod \"nova-cell0-1594-account-create-98rkq\" (UID: \"1b32816c-6aea-448d-bc18-38a3206ae2d3\") " pod="openstack/nova-cell0-1594-account-create-98rkq" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.664795 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-49c1-account-create-7k8mk"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.664823 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-78fd8b465b-jwbnc"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.670709 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-78fd8b465b-jwbnc"] Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.677215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwbg\" (UniqueName: \"kubernetes.io/projected/1b32816c-6aea-448d-bc18-38a3206ae2d3-kube-api-access-cpwbg\") pod \"nova-cell0-1594-account-create-98rkq\" (UID: \"1b32816c-6aea-448d-bc18-38a3206ae2d3\") " pod="openstack/nova-cell0-1594-account-create-98rkq" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.679239 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.754416 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1594-account-create-98rkq" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.759304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4e0e01-0fde-4d72-8693-f3ac9edde707-operator-scripts\") pod \"nova-cell1-49c1-account-create-7k8mk\" (UID: \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\") " pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.759508 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8tp\" (UniqueName: \"kubernetes.io/projected/5d4e0e01-0fde-4d72-8693-f3ac9edde707-kube-api-access-mf8tp\") pod \"nova-cell1-49c1-account-create-7k8mk\" (UID: \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\") " pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.861283 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4e0e01-0fde-4d72-8693-f3ac9edde707-operator-scripts\") pod \"nova-cell1-49c1-account-create-7k8mk\" (UID: \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\") " pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.861472 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8tp\" (UniqueName: \"kubernetes.io/projected/5d4e0e01-0fde-4d72-8693-f3ac9edde707-kube-api-access-mf8tp\") pod \"nova-cell1-49c1-account-create-7k8mk\" (UID: \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\") " pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.863148 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4e0e01-0fde-4d72-8693-f3ac9edde707-operator-scripts\") pod \"nova-cell1-49c1-account-create-7k8mk\" (UID: \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\") " pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 04:12:40.879094 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8tp\" (UniqueName: \"kubernetes.io/projected/5d4e0e01-0fde-4d72-8693-f3ac9edde707-kube-api-access-mf8tp\") pod \"nova-cell1-49c1-account-create-7k8mk\" (UID: \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\") " pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:40 crc kubenswrapper[4751]: I1123 
04:12:40.905212 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gr4mz"] Nov 23 04:12:40 crc kubenswrapper[4751]: W1123 04:12:40.918940 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d838efd_6302_4350_a91d_a04a9c37f699.slice/crio-7e5db96555cbdde37533d2b5fa1c3a8ee683ecedb9e8965d41c664d68438b9f4 WatchSource:0}: Error finding container 7e5db96555cbdde37533d2b5fa1c3a8ee683ecedb9e8965d41c664d68438b9f4: Status 404 returned error can't find the container with id 7e5db96555cbdde37533d2b5fa1c3a8ee683ecedb9e8965d41c664d68438b9f4 Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.105078 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.161505 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-grvjd"] Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.172026 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7892-account-create-5nvvr"] Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.266620 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-grvjd" event={"ID":"dcd9da06-fe98-41d1-8e37-8207665dad25","Type":"ContainerStarted","Data":"40b7eaaec0452d6ad0139ff24d7d49c8af687aaadb56b371b9b787fb35de69d6"} Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.273465 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7892-account-create-5nvvr" event={"ID":"86e06221-29e8-46f0-9cb2-5c1dfcd166cd","Type":"ContainerStarted","Data":"994d04d7a37a7481d6496a7a661c3c097a676bb30bcfb70a569b52291984e30d"} Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.274950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-755d45c5-9j9lj" event={"ID":"69aeb1a6-d144-470d-8b47-f4e0126bd9fb","Type":"ContainerStarted","Data":"6c534b3fe4f9606d3d6d3e2bfe03349c32bc7be74c3b1f973a04788b8d2e458c"} Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.274968 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-755d45c5-9j9lj" event={"ID":"69aeb1a6-d144-470d-8b47-f4e0126bd9fb","Type":"ContainerStarted","Data":"f3418c4f6781926140c27f8bef0080e59b93332f60846ca911757f634bebaa01"} Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.274978 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-755d45c5-9j9lj" event={"ID":"69aeb1a6-d144-470d-8b47-f4e0126bd9fb","Type":"ContainerStarted","Data":"e4175eaa4179e6384e5e3f2bb61f3ed0ea0f071ea78655a50d00c209f3b164ed"} Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.276214 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.276240 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-755d45c5-9j9lj" Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.288192 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb989e90-55d2-4317-8466-2c724c57428e" containerID="76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca" exitCode=0 Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.288266 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerDied","Data":"76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca"} Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.297232 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-755d45c5-9j9lj" podStartSLOduration=2.297214034 podStartE2EDuration="2.297214034s" podCreationTimestamp="2025-11-23 04:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:41.295098738 +0000 UTC m=+1057.488770097" watchObservedRunningTime="2025-11-23 04:12:41.297214034 +0000 UTC m=+1057.490885393" Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.306595 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gr4mz" event={"ID":"5d838efd-6302-4350-a91d-a04a9c37f699","Type":"ContainerStarted","Data":"b9b02b6d677c06af8c748630796908ef7a4c3dcdb806df3c262da5dbeddbb6ea"} Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.306642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gr4mz" event={"ID":"5d838efd-6302-4350-a91d-a04a9c37f699","Type":"ContainerStarted","Data":"7e5db96555cbdde37533d2b5fa1c3a8ee683ecedb9e8965d41c664d68438b9f4"} Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.332444 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-s24br"] Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.343712 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-gr4mz" podStartSLOduration=2.343697542 podStartE2EDuration="2.343697542s" podCreationTimestamp="2025-11-23 04:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:12:41.324757132 +0000 UTC m=+1057.518428491" watchObservedRunningTime="2025-11-23 04:12:41.343697542 +0000 UTC m=+1057.537368901" Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.396107 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1594-account-create-98rkq"] Nov 23 04:12:41 crc kubenswrapper[4751]: W1123 04:12:41.436296 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b32816c_6aea_448d_bc18_38a3206ae2d3.slice/crio-b7f424b656cf4253a8bad62a78206cb051ebef0440215e1c263ec6321d615dd3 WatchSource:0}: Error finding container b7f424b656cf4253a8bad62a78206cb051ebef0440215e1c263ec6321d615dd3: Status 404 returned error can't find the container with id b7f424b656cf4253a8bad62a78206cb051ebef0440215e1c263ec6321d615dd3 Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.554444 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.591463 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-49c1-account-create-7k8mk"] Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.874545 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.999586 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-combined-ca-bundle\") pod \"eb989e90-55d2-4317-8466-2c724c57428e\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.999615 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-config-data\") pod \"eb989e90-55d2-4317-8466-2c724c57428e\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.999664 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-sg-core-conf-yaml\") pod \"eb989e90-55d2-4317-8466-2c724c57428e\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.999754 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-log-httpd\") pod \"eb989e90-55d2-4317-8466-2c724c57428e\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.999774 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-run-httpd\") pod \"eb989e90-55d2-4317-8466-2c724c57428e\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.999837 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf9km\" (UniqueName: \"kubernetes.io/projected/eb989e90-55d2-4317-8466-2c724c57428e-kube-api-access-sf9km\") pod \"eb989e90-55d2-4317-8466-2c724c57428e\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " Nov 23 04:12:41 crc kubenswrapper[4751]: I1123 04:12:41.999862 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-scripts\") pod \"eb989e90-55d2-4317-8466-2c724c57428e\" (UID: \"eb989e90-55d2-4317-8466-2c724c57428e\") " Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.001179 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb989e90-55d2-4317-8466-2c724c57428e" (UID: "eb989e90-55d2-4317-8466-2c724c57428e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.001372 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb989e90-55d2-4317-8466-2c724c57428e" (UID: "eb989e90-55d2-4317-8466-2c724c57428e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.006886 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-scripts" (OuterVolumeSpecName: "scripts") pod "eb989e90-55d2-4317-8466-2c724c57428e" (UID: "eb989e90-55d2-4317-8466-2c724c57428e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.011474 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb989e90-55d2-4317-8466-2c724c57428e-kube-api-access-sf9km" (OuterVolumeSpecName: "kube-api-access-sf9km") pod "eb989e90-55d2-4317-8466-2c724c57428e" (UID: "eb989e90-55d2-4317-8466-2c724c57428e"). InnerVolumeSpecName "kube-api-access-sf9km". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.015892 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bc45b664d-wh6ld" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.093297 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb989e90-55d2-4317-8466-2c724c57428e" (UID: "eb989e90-55d2-4317-8466-2c724c57428e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.105637 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf9km\" (UniqueName: \"kubernetes.io/projected/eb989e90-55d2-4317-8466-2c724c57428e-kube-api-access-sf9km\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.105693 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.105709 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.105719 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.105730 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb989e90-55d2-4317-8466-2c724c57428e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.258232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb989e90-55d2-4317-8466-2c724c57428e" (UID: "eb989e90-55d2-4317-8466-2c724c57428e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.316237 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.337479 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-config-data" (OuterVolumeSpecName: "config-data") pod "eb989e90-55d2-4317-8466-2c724c57428e" (UID: "eb989e90-55d2-4317-8466-2c724c57428e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.340499 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1594-account-create-98rkq" event={"ID":"1b32816c-6aea-448d-bc18-38a3206ae2d3","Type":"ContainerStarted","Data":"2e6b30985f56bd4b211e2b2cb029fd41c57945cddd96e43ddc655e22bb7a9e68"} Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.340542 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1594-account-create-98rkq" event={"ID":"1b32816c-6aea-448d-bc18-38a3206ae2d3","Type":"ContainerStarted","Data":"b7f424b656cf4253a8bad62a78206cb051ebef0440215e1c263ec6321d615dd3"} Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.362606 4751 generic.go:334] "Generic (PLEG): container finished" podID="c9e20f20-d357-4381-b150-bd0ca23cee86" containerID="90b46d60984a3cb178bc04fc5c8d7866ccea8b20f7ae27a3ad17072d64e1bad4" exitCode=0 Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.363065 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-s24br" event={"ID":"c9e20f20-d357-4381-b150-bd0ca23cee86","Type":"ContainerDied","Data":"90b46d60984a3cb178bc04fc5c8d7866ccea8b20f7ae27a3ad17072d64e1bad4"} Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.363094 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-s24br" event={"ID":"c9e20f20-d357-4381-b150-bd0ca23cee86","Type":"ContainerStarted","Data":"ccceea9f0880fa92490b51cc608d8f742473a4f67300be1f9580ddeeac284fc9"} Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.377484 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb989e90-55d2-4317-8466-2c724c57428e" containerID="6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1" exitCode=0 Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.377547 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerDied","Data":"6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1"} Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.377586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb989e90-55d2-4317-8466-2c724c57428e","Type":"ContainerDied","Data":"93d8e921bc0758f7542cd499c3e8dfcc45ff90d8e29a6c552b39eec943feaf2c"} Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.377604 4751 scope.go:117] "RemoveContainer" containerID="28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.377715 4751 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.388262 4751 generic.go:334] "Generic (PLEG): container finished" podID="5d838efd-6302-4350-a91d-a04a9c37f699" containerID="b9b02b6d677c06af8c748630796908ef7a4c3dcdb806df3c262da5dbeddbb6ea" exitCode=0
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.388376 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gr4mz" event={"ID":"5d838efd-6302-4350-a91d-a04a9c37f699","Type":"ContainerDied","Data":"b9b02b6d677c06af8c748630796908ef7a4c3dcdb806df3c262da5dbeddbb6ea"}
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.390875 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49c1-account-create-7k8mk" event={"ID":"5d4e0e01-0fde-4d72-8693-f3ac9edde707","Type":"ContainerStarted","Data":"fd8489292d0f781c5e544fc6e1320ed42eb9765bcdb5348c1e2f7fe758c9d02f"}
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.392173 4751 generic.go:334] "Generic (PLEG): container finished" podID="dcd9da06-fe98-41d1-8e37-8207665dad25" containerID="57f3ee99f361b849510efc691b8eb2ffffd155cdc5b5457dc8c3c0d9d5761a8b" exitCode=0
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.392230 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-grvjd" event={"ID":"dcd9da06-fe98-41d1-8e37-8207665dad25","Type":"ContainerDied","Data":"57f3ee99f361b849510efc691b8eb2ffffd155cdc5b5457dc8c3c0d9d5761a8b"}
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.394318 4751 generic.go:334] "Generic (PLEG): container finished" podID="86e06221-29e8-46f0-9cb2-5c1dfcd166cd" containerID="7b7204f054ca8264d205087aef66a1eb3b534994228996dd4a8f8d5bcfdb8d64" exitCode=0
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.395199 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7892-account-create-5nvvr" event={"ID":"86e06221-29e8-46f0-9cb2-5c1dfcd166cd","Type":"ContainerDied","Data":"7b7204f054ca8264d205087aef66a1eb3b534994228996dd4a8f8d5bcfdb8d64"}
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.409618 4751 scope.go:117] "RemoveContainer" containerID="b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.437145 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb989e90-55d2-4317-8466-2c724c57428e-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.470424 4751 scope.go:117] "RemoveContainer" containerID="6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.504473 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.522094 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545168 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:12:42 crc kubenswrapper[4751]: E1123 04:12:42.545622 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="proxy-httpd"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545648 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="proxy-httpd"
Nov 23 04:12:42 crc
kubenswrapper[4751]: E1123 04:12:42.545666 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="sg-core" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545674 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="sg-core" Nov 23 04:12:42 crc kubenswrapper[4751]: E1123 04:12:42.545692 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="ceilometer-central-agent" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545699 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="ceilometer-central-agent" Nov 23 04:12:42 crc kubenswrapper[4751]: E1123 04:12:42.545708 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="ceilometer-notification-agent" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545714 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="ceilometer-notification-agent" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545894 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="ceilometer-notification-agent" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545902 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="ceilometer-central-agent" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545923 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="sg-core" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.545939 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb989e90-55d2-4317-8466-2c724c57428e" containerName="proxy-httpd" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.547631 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.549148 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.549396 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.552956 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.554924 4751 scope.go:117] "RemoveContainer" containerID="76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.594759 4751 scope.go:117] "RemoveContainer" containerID="28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484" Nov 23 04:12:42 crc kubenswrapper[4751]: E1123 04:12:42.598657 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484\": container with ID starting with 28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484 not found: ID does not exist" containerID="28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.598690 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484"} err="failed to get container status \"28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484\": rpc error: code = NotFound desc = could not find container \"28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484\": container with ID starting with 28969cddd1a1f94e96ef95c9e17109682cc32a78ed6b2e93d0c4a2ef19bf0484 not found: ID does not exist" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.598710 4751 scope.go:117] "RemoveContainer" containerID="b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b" Nov 23 04:12:42 crc kubenswrapper[4751]: E1123 04:12:42.599263 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b\": container with ID starting with b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b not found: ID does not exist" containerID="b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.599294 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b"} err="failed to get container status \"b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b\": rpc error: code = NotFound desc = could not find container \"b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b\": container with ID starting with b465114382aa8d883fd5478db7a5ca6c50278f1a91990080b7e313b578eecf7b not found: ID does not exist" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.599312 4751 scope.go:117] "RemoveContainer" containerID="6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1" Nov 23 04:12:42 crc kubenswrapper[4751]: E1123 04:12:42.599691 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1\": container with ID starting with 6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1 not found: ID does not exist" containerID="6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.599720 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1"} err="failed to get container status \"6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1\": rpc error: code = NotFound desc = could not find container \"6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1\": container with ID starting with 6c1a3c92ede870f3d81617c7d1728e05372fc437e196e91195903939f149f6a1 not found: ID does not exist" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.599735 4751 scope.go:117] "RemoveContainer" containerID="76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca" Nov 23 04:12:42 crc kubenswrapper[4751]: E1123 04:12:42.600636 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca\": container with ID starting with 76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca not found: ID does not exist" containerID="76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.600668 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca"} err="failed to get container status \"76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca\": rpc error: code = NotFound desc = could not find container \"76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca\": container with ID starting with 76f8800b13e01cc8abb53f8f3193e7ad4fd3991ba178443d47242c7417e437ca not found: ID does not exist" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.657613 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fae0d08-08a7-49ff-aa50-5979741223b9" path="/var/lib/kubelet/pods/4fae0d08-08a7-49ff-aa50-5979741223b9/volumes" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.658576 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb989e90-55d2-4317-8466-2c724c57428e" path="/var/lib/kubelet/pods/eb989e90-55d2-4317-8466-2c724c57428e/volumes" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.746444 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-run-httpd\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.747022 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-log-httpd\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.747090 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-config-data\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.747168 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-scripts\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.747207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.747274 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7kxc\" (UniqueName: \"kubernetes.io/projected/5a09e742-94e6-4866-9d00-27af448b763e-kube-api-access-f7kxc\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.747298 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.848641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-run-httpd\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.848700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-log-httpd\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.848729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-config-data\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.848760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-scripts\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.848786 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0" Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.848819 4751 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f7kxc\" (UniqueName: \"kubernetes.io/projected/5a09e742-94e6-4866-9d00-27af448b763e-kube-api-access-f7kxc\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.848841 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.849287 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-log-httpd\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.849530 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-run-httpd\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.859468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-scripts\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.861267 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-config-data\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.861759 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.863175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.871545 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7kxc\" (UniqueName: \"kubernetes.io/projected/5a09e742-94e6-4866-9d00-27af448b763e-kube-api-access-f7kxc\") pod \"ceilometer-0\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") " pod="openstack/ceilometer-0"
Nov 23 04:12:42 crc kubenswrapper[4751]: I1123 04:12:42.884384 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
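
Each volume of the replacement ceilometer-0 pod walks the same three steps above: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. When auditing a saved journal for a volume stuck between steps, pulling (operation, phase, volume) triples out of each line is enough; a sketch over stdin, with the regexp written against the exact field layout of these entries:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches both `"operationExecutor.X started for volume \"name\"` and
    // `"MountVolume.SetUp succeeded for volume \"name\"` as logged above.
    var re = regexp.MustCompile(`"(?:operationExecutor\.)?([\w.]+) (started|succeeded) for volume \\"([^"\\]+)\\"`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                fmt.Printf("%-32s %-9s %s\n", m[1], m[2], m[3])
            }
        }
    }
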
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.406879 4751 generic.go:334] "Generic (PLEG): container finished" podID="5d4e0e01-0fde-4d72-8693-f3ac9edde707" containerID="ecbd06a2d2acab5fd51b7c394f05aff59c3893043fe24734404a3e72e6ef98ba" exitCode=0
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.406937 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49c1-account-create-7k8mk" event={"ID":"5d4e0e01-0fde-4d72-8693-f3ac9edde707","Type":"ContainerDied","Data":"ecbd06a2d2acab5fd51b7c394f05aff59c3893043fe24734404a3e72e6ef98ba"}
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.409796 4751 generic.go:334] "Generic (PLEG): container finished" podID="1b32816c-6aea-448d-bc18-38a3206ae2d3" containerID="2e6b30985f56bd4b211e2b2cb029fd41c57945cddd96e43ddc655e22bb7a9e68" exitCode=0
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.409863 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1594-account-create-98rkq" event={"ID":"1b32816c-6aea-448d-bc18-38a3206ae2d3","Type":"ContainerDied","Data":"2e6b30985f56bd4b211e2b2cb029fd41c57945cddd96e43ddc655e22bb7a9e68"}
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.500794 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:12:43 crc kubenswrapper[4751]: W1123 04:12:43.557206 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a09e742_94e6_4866_9d00_27af448b763e.slice/crio-cda3ee39f68a0ea4e8057d08cba0c66c330090c65c0b53268285a0681079dd0e WatchSource:0}: Error finding container cda3ee39f68a0ea4e8057d08cba0c66c330090c65c0b53268285a0681079dd0e: Status 404 returned error can't find the container with id cda3ee39f68a0ea4e8057d08cba0c66c330090c65c0b53268285a0681079dd0e
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.922845 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gr4mz"
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.982279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d838efd-6302-4350-a91d-a04a9c37f699-operator-scripts\") pod \"5d838efd-6302-4350-a91d-a04a9c37f699\" (UID: \"5d838efd-6302-4350-a91d-a04a9c37f699\") "
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.982645 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpxhp\" (UniqueName: \"kubernetes.io/projected/5d838efd-6302-4350-a91d-a04a9c37f699-kube-api-access-bpxhp\") pod \"5d838efd-6302-4350-a91d-a04a9c37f699\" (UID: \"5d838efd-6302-4350-a91d-a04a9c37f699\") "
Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.984906 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d838efd-6302-4350-a91d-a04a9c37f699-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d838efd-6302-4350-a91d-a04a9c37f699" (UID: "5d838efd-6302-4350-a91d-a04a9c37f699"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:43 crc kubenswrapper[4751]: I1123 04:12:43.989769 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d838efd-6302-4350-a91d-a04a9c37f699-kube-api-access-bpxhp" (OuterVolumeSpecName: "kube-api-access-bpxhp") pod "5d838efd-6302-4350-a91d-a04a9c37f699" (UID: "5d838efd-6302-4350-a91d-a04a9c37f699"). InnerVolumeSpecName "kube-api-access-bpxhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.048094 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-s24br" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.057814 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-grvjd" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.068115 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.084010 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc6gc\" (UniqueName: \"kubernetes.io/projected/c9e20f20-d357-4381-b150-bd0ca23cee86-kube-api-access-pc6gc\") pod \"c9e20f20-d357-4381-b150-bd0ca23cee86\" (UID: \"c9e20f20-d357-4381-b150-bd0ca23cee86\") " Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.084060 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcd9da06-fe98-41d1-8e37-8207665dad25-operator-scripts\") pod \"dcd9da06-fe98-41d1-8e37-8207665dad25\" (UID: \"dcd9da06-fe98-41d1-8e37-8207665dad25\") " Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.084197 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e20f20-d357-4381-b150-bd0ca23cee86-operator-scripts\") pod \"c9e20f20-d357-4381-b150-bd0ca23cee86\" (UID: \"c9e20f20-d357-4381-b150-bd0ca23cee86\") " Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.084235 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-operator-scripts\") pod \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\" (UID: \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\") " Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.084269 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt7kg\" (UniqueName: \"kubernetes.io/projected/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-kube-api-access-pt7kg\") pod \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\" (UID: \"86e06221-29e8-46f0-9cb2-5c1dfcd166cd\") " Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.084578 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcrf6\" (UniqueName: \"kubernetes.io/projected/dcd9da06-fe98-41d1-8e37-8207665dad25-kube-api-access-vcrf6\") pod \"dcd9da06-fe98-41d1-8e37-8207665dad25\" (UID: \"dcd9da06-fe98-41d1-8e37-8207665dad25\") " Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.085334 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd9da06-fe98-41d1-8e37-8207665dad25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"dcd9da06-fe98-41d1-8e37-8207665dad25" (UID: "dcd9da06-fe98-41d1-8e37-8207665dad25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.085742 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcd9da06-fe98-41d1-8e37-8207665dad25-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.085763 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpxhp\" (UniqueName: \"kubernetes.io/projected/5d838efd-6302-4350-a91d-a04a9c37f699-kube-api-access-bpxhp\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.085774 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d838efd-6302-4350-a91d-a04a9c37f699-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.085998 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e20f20-d357-4381-b150-bd0ca23cee86-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9e20f20-d357-4381-b150-bd0ca23cee86" (UID: "c9e20f20-d357-4381-b150-bd0ca23cee86"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.086173 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86e06221-29e8-46f0-9cb2-5c1dfcd166cd" (UID: "86e06221-29e8-46f0-9cb2-5c1dfcd166cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.092511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-kube-api-access-pt7kg" (OuterVolumeSpecName: "kube-api-access-pt7kg") pod "86e06221-29e8-46f0-9cb2-5c1dfcd166cd" (UID: "86e06221-29e8-46f0-9cb2-5c1dfcd166cd"). InnerVolumeSpecName "kube-api-access-pt7kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.092637 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e20f20-d357-4381-b150-bd0ca23cee86-kube-api-access-pc6gc" (OuterVolumeSpecName: "kube-api-access-pc6gc") pod "c9e20f20-d357-4381-b150-bd0ca23cee86" (UID: "c9e20f20-d357-4381-b150-bd0ca23cee86"). InnerVolumeSpecName "kube-api-access-pc6gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.092716 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd9da06-fe98-41d1-8e37-8207665dad25-kube-api-access-vcrf6" (OuterVolumeSpecName: "kube-api-access-vcrf6") pod "dcd9da06-fe98-41d1-8e37-8207665dad25" (UID: "dcd9da06-fe98-41d1-8e37-8207665dad25"). InnerVolumeSpecName "kube-api-access-vcrf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.187075 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcrf6\" (UniqueName: \"kubernetes.io/projected/dcd9da06-fe98-41d1-8e37-8207665dad25-kube-api-access-vcrf6\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.187109 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc6gc\" (UniqueName: \"kubernetes.io/projected/c9e20f20-d357-4381-b150-bd0ca23cee86-kube-api-access-pc6gc\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.187119 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e20f20-d357-4381-b150-bd0ca23cee86-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.187130 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.187139 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt7kg\" (UniqueName: \"kubernetes.io/projected/86e06221-29e8-46f0-9cb2-5c1dfcd166cd-kube-api-access-pt7kg\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.423533 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7892-account-create-5nvvr" event={"ID":"86e06221-29e8-46f0-9cb2-5c1dfcd166cd","Type":"ContainerDied","Data":"994d04d7a37a7481d6496a7a661c3c097a676bb30bcfb70a569b52291984e30d"} Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.423810 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994d04d7a37a7481d6496a7a661c3c097a676bb30bcfb70a569b52291984e30d" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.423561 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7892-account-create-5nvvr" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.426078 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-s24br" event={"ID":"c9e20f20-d357-4381-b150-bd0ca23cee86","Type":"ContainerDied","Data":"ccceea9f0880fa92490b51cc608d8f742473a4f67300be1f9580ddeeac284fc9"} Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.426109 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccceea9f0880fa92490b51cc608d8f742473a4f67300be1f9580ddeeac284fc9" Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.426158 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-s24br"
Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.437996 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerStarted","Data":"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"}
Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.438037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerStarted","Data":"cda3ee39f68a0ea4e8057d08cba0c66c330090c65c0b53268285a0681079dd0e"}
Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.439966 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gr4mz" event={"ID":"5d838efd-6302-4350-a91d-a04a9c37f699","Type":"ContainerDied","Data":"7e5db96555cbdde37533d2b5fa1c3a8ee683ecedb9e8965d41c664d68438b9f4"}
Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.439990 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5db96555cbdde37533d2b5fa1c3a8ee683ecedb9e8965d41c664d68438b9f4"
Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.440042 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gr4mz"
Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.455840 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-grvjd"
Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.456183 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-grvjd" event={"ID":"dcd9da06-fe98-41d1-8e37-8207665dad25","Type":"ContainerDied","Data":"40b7eaaec0452d6ad0139ff24d7d49c8af687aaadb56b371b9b787fb35de69d6"}
Nov 23 04:12:44 crc kubenswrapper[4751]: I1123 04:12:44.456309 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b7eaaec0452d6ad0139ff24d7d49c8af687aaadb56b371b9b787fb35de69d6"
Nov 23 04:12:46 crc kubenswrapper[4751]: I1123 04:12:46.904038 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 23 04:12:49 crc kubenswrapper[4751]: I1123 04:12:49.604262 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-755d45c5-9j9lj"
Nov 23 04:12:49 crc kubenswrapper[4751]: I1123 04:12:49.605753 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-755d45c5-9j9lj"
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.228266 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1594-account-create-98rkq"
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.324681 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49c1-account-create-7k8mk"
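
The SyncLoop (PLEG) lines above are the kubelet's node-local view of container lifecycle; from outside the node, the closest equivalent is a watch on pod status through the API server. A sketch with client-go (the kubeconfig path is a placeholder; namespace "openstack" is taken from the entries above):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        w, err := cs.CoreV1().Pods("openstack").Watch(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        // ADDED/MODIFIED/DELETED events roughly mirror the SyncLoop
        // ADD/UPDATE/DELETE lines in the journal above.
        for ev := range w.ResultChan() {
            if pod, ok := ev.Object.(*corev1.Pod); ok {
                fmt.Printf("%-8s pod=%s phase=%s\n", ev.Type, pod.Name, pod.Status.Phase)
            }
        }
    }
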
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.423655 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf8tp\" (UniqueName: \"kubernetes.io/projected/5d4e0e01-0fde-4d72-8693-f3ac9edde707-kube-api-access-mf8tp\") pod \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\" (UID: \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\") "
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.423706 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4e0e01-0fde-4d72-8693-f3ac9edde707-operator-scripts\") pod \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\" (UID: \"5d4e0e01-0fde-4d72-8693-f3ac9edde707\") "
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.423818 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b32816c-6aea-448d-bc18-38a3206ae2d3-operator-scripts\") pod \"1b32816c-6aea-448d-bc18-38a3206ae2d3\" (UID: \"1b32816c-6aea-448d-bc18-38a3206ae2d3\") "
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.423864 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpwbg\" (UniqueName: \"kubernetes.io/projected/1b32816c-6aea-448d-bc18-38a3206ae2d3-kube-api-access-cpwbg\") pod \"1b32816c-6aea-448d-bc18-38a3206ae2d3\" (UID: \"1b32816c-6aea-448d-bc18-38a3206ae2d3\") "
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.424504 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b32816c-6aea-448d-bc18-38a3206ae2d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b32816c-6aea-448d-bc18-38a3206ae2d3" (UID: "1b32816c-6aea-448d-bc18-38a3206ae2d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.424589 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4e0e01-0fde-4d72-8693-f3ac9edde707-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d4e0e01-0fde-4d72-8693-f3ac9edde707" (UID: "5d4e0e01-0fde-4d72-8693-f3ac9edde707"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.429267 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b32816c-6aea-448d-bc18-38a3206ae2d3-kube-api-access-cpwbg" (OuterVolumeSpecName: "kube-api-access-cpwbg") pod "1b32816c-6aea-448d-bc18-38a3206ae2d3" (UID: "1b32816c-6aea-448d-bc18-38a3206ae2d3"). InnerVolumeSpecName "kube-api-access-cpwbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.429577 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4e0e01-0fde-4d72-8693-f3ac9edde707-kube-api-access-mf8tp" (OuterVolumeSpecName: "kube-api-access-mf8tp") pod "5d4e0e01-0fde-4d72-8693-f3ac9edde707" (UID: "5d4e0e01-0fde-4d72-8693-f3ac9edde707"). InnerVolumeSpecName "kube-api-access-mf8tp".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.526020 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf8tp\" (UniqueName: \"kubernetes.io/projected/5d4e0e01-0fde-4d72-8693-f3ac9edde707-kube-api-access-mf8tp\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.526065 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4e0e01-0fde-4d72-8693-f3ac9edde707-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.526081 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b32816c-6aea-448d-bc18-38a3206ae2d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.526095 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpwbg\" (UniqueName: \"kubernetes.io/projected/1b32816c-6aea-448d-bc18-38a3206ae2d3-kube-api-access-cpwbg\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.558776 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49c1-account-create-7k8mk" event={"ID":"5d4e0e01-0fde-4d72-8693-f3ac9edde707","Type":"ContainerDied","Data":"fd8489292d0f781c5e544fc6e1320ed42eb9765bcdb5348c1e2f7fe758c9d02f"} Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.558990 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd8489292d0f781c5e544fc6e1320ed42eb9765bcdb5348c1e2f7fe758c9d02f" Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.558793 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49c1-account-create-7k8mk" Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.560988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1594-account-create-98rkq" event={"ID":"1b32816c-6aea-448d-bc18-38a3206ae2d3","Type":"ContainerDied","Data":"b7f424b656cf4253a8bad62a78206cb051ebef0440215e1c263ec6321d615dd3"} Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.561042 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f424b656cf4253a8bad62a78206cb051ebef0440215e1c263ec6321d615dd3" Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.561008 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1594-account-create-98rkq"
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.569058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d8f1f72f-cb69-43a3-8f06-1f348a731330","Type":"ContainerStarted","Data":"c848269cac7b209ddfbc4263a8e230b8e3be3d43b59caddd5ff4334c8cee6bc7"}
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.572186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerStarted","Data":"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"}
Nov 23 04:12:50 crc kubenswrapper[4751]: I1123 04:12:50.589984 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.286068529 podStartE2EDuration="16.589970138s" podCreationTimestamp="2025-11-23 04:12:34 +0000 UTC" firstStartedPulling="2025-11-23 04:12:35.801871434 +0000 UTC m=+1051.995542793" lastFinishedPulling="2025-11-23 04:12:50.105773053 +0000 UTC m=+1066.299444402" observedRunningTime="2025-11-23 04:12:50.584209276 +0000 UTC m=+1066.777880635" watchObservedRunningTime="2025-11-23 04:12:50.589970138 +0000 UTC m=+1066.783641497"
Nov 23 04:12:51 crc kubenswrapper[4751]: I1123 04:12:51.222507 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:12:51 crc kubenswrapper[4751]: I1123 04:12:51.603951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerStarted","Data":"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"}
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.016123 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-bc45b664d-wh6ld" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.016307 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bc45b664d-wh6ld"
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.616091 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerStarted","Data":"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"}
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.616299 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="ceilometer-central-agent" containerID="cri-o://34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a" gracePeriod=30
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.616425 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.616431 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="proxy-httpd" containerID="cri-o://d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6" gracePeriod=30
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.616481 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="sg-core" containerID="cri-o://8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c" gracePeriod=30
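
The "Killing container with a grace period ... gracePeriod=30" entries show the kubelet giving each ceilometer-0 container 30 seconds to exit before a hard kill; the budget comes from the pod's terminationGracePeriodSeconds and can also be set per delete request. A client-go sketch of such a delete (placeholder kubeconfig path; pod name and namespace taken from the log):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        grace := int64(30) // same budget as gracePeriod=30 above
        err = cs.CoreV1().Pods("openstack").Delete(context.Background(), "ceilometer-0",
            metav1.DeleteOptions{GracePeriodSeconds: &grace})
        if err != nil {
            panic(err)
        }
    }
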
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.616520 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="ceilometer-notification-agent" containerID="cri-o://02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9" gracePeriod=30
Nov 23 04:12:52 crc kubenswrapper[4751]: I1123 04:12:52.668419 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.220686103 podStartE2EDuration="10.668397628s" podCreationTimestamp="2025-11-23 04:12:42 +0000 UTC" firstStartedPulling="2025-11-23 04:12:43.558792884 +0000 UTC m=+1059.752464243" lastFinishedPulling="2025-11-23 04:12:52.006504409 +0000 UTC m=+1068.200175768" observedRunningTime="2025-11-23 04:12:52.652857528 +0000 UTC m=+1068.846528897" watchObservedRunningTime="2025-11-23 04:12:52.668397628 +0000 UTC m=+1068.862068997"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.390794 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.483102 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-combined-ca-bundle\") pod \"5a09e742-94e6-4866-9d00-27af448b763e\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") "
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.483155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-run-httpd\") pod \"5a09e742-94e6-4866-9d00-27af448b763e\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") "
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.483201 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-sg-core-conf-yaml\") pod \"5a09e742-94e6-4866-9d00-27af448b763e\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") "
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.483236 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-config-data\") pod \"5a09e742-94e6-4866-9d00-27af448b763e\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") "
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.483287 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-log-httpd\") pod \"5a09e742-94e6-4866-9d00-27af448b763e\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") "
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.483311 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-scripts\") pod \"5a09e742-94e6-4866-9d00-27af448b763e\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") "
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.483471 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7kxc\" (UniqueName: \"kubernetes.io/projected/5a09e742-94e6-4866-9d00-27af448b763e-kube-api-access-f7kxc\") pod \"5a09e742-94e6-4866-9d00-27af448b763e\" (UID: \"5a09e742-94e6-4866-9d00-27af448b763e\") "
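
The pod_startup_latency_tracker entry above reports ceilometer-0's podStartE2EDuration="10.668397628s", which is exactly watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration=2.220686103 matches that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The second relation is inferred from the logged values, not from kubelet documentation. A Go check of the arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    // ts parses timestamps in the format the tracker logs
    // (Go's default time.Time formatting).
    func ts(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := ts("2025-11-23 04:12:42 +0000 UTC")
        firstPull := ts("2025-11-23 04:12:43.558792884 +0000 UTC")
        lastPull := ts("2025-11-23 04:12:52.006504409 +0000 UTC")
        observed := ts("2025-11-23 04:12:52.668397628 +0000 UTC")

        e2e := observed.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println("podStartE2EDuration:", e2e) // 10.668397628s
        fmt.Println("podStartSLOduration:", slo) // 2.220686103s
    }
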
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.483572 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5a09e742-94e6-4866-9d00-27af448b763e" (UID: "5a09e742-94e6-4866-9d00-27af448b763e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.484325 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.484537 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5a09e742-94e6-4866-9d00-27af448b763e" (UID: "5a09e742-94e6-4866-9d00-27af448b763e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.489960 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a09e742-94e6-4866-9d00-27af448b763e-kube-api-access-f7kxc" (OuterVolumeSpecName: "kube-api-access-f7kxc") pod "5a09e742-94e6-4866-9d00-27af448b763e" (UID: "5a09e742-94e6-4866-9d00-27af448b763e"). InnerVolumeSpecName "kube-api-access-f7kxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.504600 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-scripts" (OuterVolumeSpecName: "scripts") pod "5a09e742-94e6-4866-9d00-27af448b763e" (UID: "5a09e742-94e6-4866-9d00-27af448b763e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.525625 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5a09e742-94e6-4866-9d00-27af448b763e" (UID: "5a09e742-94e6-4866-9d00-27af448b763e"). InnerVolumeSpecName "sg-core-conf-yaml".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.586076 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7kxc\" (UniqueName: \"kubernetes.io/projected/5a09e742-94e6-4866-9d00-27af448b763e-kube-api-access-f7kxc\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.586103 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.586112 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a09e742-94e6-4866-9d00-27af448b763e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.586119 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.589558 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a09e742-94e6-4866-9d00-27af448b763e" (UID: "5a09e742-94e6-4866-9d00-27af448b763e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.606829 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-config-data" (OuterVolumeSpecName: "config-data") pod "5a09e742-94e6-4866-9d00-27af448b763e" (UID: "5a09e742-94e6-4866-9d00-27af448b763e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628227 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a09e742-94e6-4866-9d00-27af448b763e" containerID="d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6" exitCode=0 Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628260 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a09e742-94e6-4866-9d00-27af448b763e" containerID="8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c" exitCode=2 Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628269 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a09e742-94e6-4866-9d00-27af448b763e" containerID="02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9" exitCode=0 Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628278 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a09e742-94e6-4866-9d00-27af448b763e" containerID="34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a" exitCode=0 Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628293 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628300 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerDied","Data":"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"} Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628391 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerDied","Data":"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"} Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerDied","Data":"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"} Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628413 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerDied","Data":"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"} Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628421 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a09e742-94e6-4866-9d00-27af448b763e","Type":"ContainerDied","Data":"cda3ee39f68a0ea4e8057d08cba0c66c330090c65c0b53268285a0681079dd0e"} Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.628437 4751 scope.go:117] "RemoveContainer" containerID="d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.649521 4751 scope.go:117] "RemoveContainer" containerID="8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.661862 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.669214 4751 scope.go:117] "RemoveContainer" containerID="02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.674042 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.718221 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.718259 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a09e742-94e6-4866-9d00-27af448b763e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736152 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736611 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e20f20-d357-4381-b150-bd0ca23cee86" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736629 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e20f20-d357-4381-b150-bd0ca23cee86" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736638 4751 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="proxy-httpd" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736646 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="proxy-httpd" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736659 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d838efd-6302-4350-a91d-a04a9c37f699" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736664 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d838efd-6302-4350-a91d-a04a9c37f699" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736677 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="ceilometer-central-agent" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736682 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="ceilometer-central-agent" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736690 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e06221-29e8-46f0-9cb2-5c1dfcd166cd" containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736698 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e06221-29e8-46f0-9cb2-5c1dfcd166cd" containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736706 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="ceilometer-notification-agent" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736711 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="ceilometer-notification-agent" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736724 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="sg-core" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736730 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="sg-core" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736742 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd9da06-fe98-41d1-8e37-8207665dad25" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736748 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd9da06-fe98-41d1-8e37-8207665dad25" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736762 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b32816c-6aea-448d-bc18-38a3206ae2d3" containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736768 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b32816c-6aea-448d-bc18-38a3206ae2d3" containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.736778 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4e0e01-0fde-4d72-8693-f3ac9edde707" containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736784 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4e0e01-0fde-4d72-8693-f3ac9edde707" 
containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736961 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b32816c-6aea-448d-bc18-38a3206ae2d3" containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736974 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e20f20-d357-4381-b150-bd0ca23cee86" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736985 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd9da06-fe98-41d1-8e37-8207665dad25" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.736994 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="proxy-httpd" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.737002 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4e0e01-0fde-4d72-8693-f3ac9edde707" containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.737011 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="sg-core" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.737021 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d838efd-6302-4350-a91d-a04a9c37f699" containerName="mariadb-database-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.737030 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="ceilometer-notification-agent" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.737040 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e06221-29e8-46f0-9cb2-5c1dfcd166cd" containerName="mariadb-account-create" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.737050 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a09e742-94e6-4866-9d00-27af448b763e" containerName="ceilometer-central-agent" Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.738751 4751 util.go:30] "No sandbox for pod can be found. 
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.738751 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.743254 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.748821 4751 scope.go:117] "RemoveContainer" containerID="34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.749105 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.749281 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.781896 4751 scope.go:117] "RemoveContainer" containerID="d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"
Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.782388 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": container with ID starting with d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6 not found: ID does not exist" containerID="d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.782433 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"} err="failed to get container status \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": rpc error: code = NotFound desc = could not find container \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": container with ID starting with d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6 not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.782464 4751 scope.go:117] "RemoveContainer" containerID="8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"
Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.782734 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": container with ID starting with 8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c not found: ID does not exist" containerID="8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.782772 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"} err="failed to get container status \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": rpc error: code = NotFound desc = could not find container \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": container with ID starting with 8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.782791 4751 scope.go:117] "RemoveContainer" containerID="02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"
Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.782977 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": container with ID starting with 02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9 not found: ID does not exist" containerID="02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.783012 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"} err="failed to get container status \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": rpc error: code = NotFound desc = could not find container \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": container with ID starting with 02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9 not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.783029 4751 scope.go:117] "RemoveContainer" containerID="34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"
Nov 23 04:12:53 crc kubenswrapper[4751]: E1123 04:12:53.785196 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": container with ID starting with 34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a not found: ID does not exist" containerID="34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.785227 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"} err="failed to get container status \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": rpc error: code = NotFound desc = could not find container \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": container with ID starting with 34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.785242 4751 scope.go:117] "RemoveContainer" containerID="d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.785588 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"} err="failed to get container status \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": rpc error: code = NotFound desc = could not find container \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": container with ID starting with d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6 not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.785607 4751 scope.go:117] "RemoveContainer" containerID="8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.786173 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"} err="failed to get container status \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": rpc error: code = NotFound desc = could not find container \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": container with ID starting with 8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.786195 4751 scope.go:117] "RemoveContainer" containerID="02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.786410 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"} err="failed to get container status \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": rpc error: code = NotFound desc = could not find container \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": container with ID starting with 02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9 not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.786432 4751 scope.go:117] "RemoveContainer" containerID="34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.786620 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"} err="failed to get container status \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": rpc error: code = NotFound desc = could not find container \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": container with ID starting with 34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.786641 4751 scope.go:117] "RemoveContainer" containerID="d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.787214 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"} err="failed to get container status \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": rpc error: code = NotFound desc = could not find container \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": container with ID starting with d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6 not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.787240 4751 scope.go:117] "RemoveContainer" containerID="8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.787612 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"} err="failed to get container status \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": rpc error: code = NotFound desc = could not find container \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": container with ID starting with 8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.787641 4751 scope.go:117] "RemoveContainer" containerID="02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.787893 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"} err="failed to get container status \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": rpc error: code = NotFound desc = could not find container \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": container with ID starting with 02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9 not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.787905 4751 scope.go:117] "RemoveContainer" containerID="34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.788090 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"} err="failed to get container status \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": rpc error: code = NotFound desc = could not find container \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": container with ID starting with 34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.788103 4751 scope.go:117] "RemoveContainer" containerID="d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.788310 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6"} err="failed to get container status \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": rpc error: code = NotFound desc = could not find container \"d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6\": container with ID starting with d7d4ca3612317bc274d20746f332f106dd77addbf5641012c041fcad74c040e6 not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.788331 4751 scope.go:117] "RemoveContainer" containerID="8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.788623 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c"} err="failed to get container status \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": rpc error: code = NotFound desc = could not find container \"8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c\": container with ID starting with 8aafa24dcd4114cbce2a355abec9c44718ba71a31df4c54d9c4b15f755aa741c not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.788640 4751 scope.go:117] "RemoveContainer" containerID="02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.789483 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9"} err="failed to get container status \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": rpc error: code = NotFound desc = could not find container \"02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9\": container with ID starting with 02596caf7e287616c8e6f84df23ce9729f1a7d6609ea5651acc60445d856e9d9 not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.789511 4751 scope.go:117] "RemoveContainer" containerID="34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.789746 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a"} err="failed to get container status \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": rpc error: code = NotFound desc = could not find container \"34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a\": container with ID starting with 34903d5a2b1b5eac02a2c390c317501587d605365bec9bf16292b48e17d2213a not found: ID does not exist"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.921485 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.921726 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-scripts\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.921806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.921855 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-log-httpd\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.921884 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddwjv\" (UniqueName: \"kubernetes.io/projected/30a4aaad-405d-4536-9844-e0c597f6467a-kube-api-access-ddwjv\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.921915 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-run-httpd\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:53 crc kubenswrapper[4751]: I1123 04:12:53.921956 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-config-data\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.023152 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddwjv\" (UniqueName: \"kubernetes.io/projected/30a4aaad-405d-4536-9844-e0c597f6467a-kube-api-access-ddwjv\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.023625 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-run-httpd\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.024151 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-run-httpd\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.024241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-config-data\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.024858 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.024913 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-scripts\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.024958 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.025012 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-log-httpd\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.025312 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-log-httpd\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.029569 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.030097 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.030601 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-scripts\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.031073 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-config-data\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.042064 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddwjv\" (UniqueName: \"kubernetes.io/projected/30a4aaad-405d-4536-9844-e0c597f6467a-kube-api-access-ddwjv\") pod \"ceilometer-0\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.072698 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.559183 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.572184 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.637223 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerStarted","Data":"65be85eb6c9faa17645d6d69dd7bebfedb7c3aba7ee51beca0258e4d46d546b3"}
Nov 23 04:12:54 crc kubenswrapper[4751]: I1123 04:12:54.683244 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a09e742-94e6-4866-9d00-27af448b763e" path="/var/lib/kubelet/pods/5a09e742-94e6-4866-9d00-27af448b763e/volumes"
Nov 23 04:12:55 crc kubenswrapper[4751]: I1123 04:12:55.654091 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerStarted","Data":"195b151c5aa696889f3e769f98cd9ad3a11888fbd70724070ad788cd41955721"}
Nov 23 04:12:55 crc kubenswrapper[4751]: I1123 04:12:55.972559 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2l582"]
Nov 23 04:12:55 crc kubenswrapper[4751]: I1123 04:12:55.973649 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:55 crc kubenswrapper[4751]: I1123 04:12:55.975230 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 23 04:12:55 crc kubenswrapper[4751]: I1123 04:12:55.976871 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 23 04:12:55 crc kubenswrapper[4751]: I1123 04:12:55.978750 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pr24r"
Nov 23 04:12:55 crc kubenswrapper[4751]: I1123 04:12:55.994732 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2l582"]
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.164309 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2cw6\" (UniqueName: \"kubernetes.io/projected/94895060-a23e-4768-b800-3ca2557264fd-kube-api-access-z2cw6\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.164629 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.164985 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-scripts\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.165207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-config-data\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.266637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-config-data\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.267649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2cw6\" (UniqueName: \"kubernetes.io/projected/94895060-a23e-4768-b800-3ca2557264fd-kube-api-access-z2cw6\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.267775 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.267885 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-scripts\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.274205 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.274221 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-config-data\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.274704 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-scripts\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.285759 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2cw6\" (UniqueName: \"kubernetes.io/projected/94895060-a23e-4768-b800-3ca2557264fd-kube-api-access-z2cw6\") pod \"nova-cell0-conductor-db-sync-2l582\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.288585 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2l582"
Nov 23 04:12:56 crc kubenswrapper[4751]: I1123 04:12:56.723777 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2l582"]
Nov 23 04:12:56 crc kubenswrapper[4751]: W1123 04:12:56.727651 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94895060_a23e_4768_b800_3ca2557264fd.slice/crio-55322e47234da1f32b6063026768807497f639bfefc088215f237a2e2e1304de WatchSource:0}: Error finding container 55322e47234da1f32b6063026768807497f639bfefc088215f237a2e2e1304de: Status 404 returned error can't find the container with id 55322e47234da1f32b6063026768807497f639bfefc088215f237a2e2e1304de
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.673724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2l582" event={"ID":"94895060-a23e-4768-b800-3ca2557264fd","Type":"ContainerStarted","Data":"55322e47234da1f32b6063026768807497f639bfefc088215f237a2e2e1304de"}
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.679869 4751 generic.go:334] "Generic (PLEG): container finished" podID="3ce82842-e359-4824-abb2-6c652caf36ca" containerID="7e524121e462318f5ee78e48f212c7932519de6fda8d0e8b7d02daab42da34b7" exitCode=137
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.679975 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc45b664d-wh6ld" event={"ID":"3ce82842-e359-4824-abb2-6c652caf36ca","Type":"ContainerDied","Data":"7e524121e462318f5ee78e48f212c7932519de6fda8d0e8b7d02daab42da34b7"}
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.682769 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerStarted","Data":"73c422c1c8333e650173b7cf879fa479ac91e87b643e1c9fdab81b0995d459ca"}
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.960607 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bc45b664d-wh6ld"
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.998241 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-config-data\") pod \"3ce82842-e359-4824-abb2-6c652caf36ca\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") "
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.998288 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-scripts\") pod \"3ce82842-e359-4824-abb2-6c652caf36ca\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") "
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.998316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-secret-key\") pod \"3ce82842-e359-4824-abb2-6c652caf36ca\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") "
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.998364 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ce82842-e359-4824-abb2-6c652caf36ca-logs\") pod \"3ce82842-e359-4824-abb2-6c652caf36ca\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") "
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.999305 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-tls-certs\") pod \"3ce82842-e359-4824-abb2-6c652caf36ca\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") "
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.999442 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-combined-ca-bundle\") pod \"3ce82842-e359-4824-abb2-6c652caf36ca\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") "
Nov 23 04:12:57 crc kubenswrapper[4751]: I1123 04:12:57.999470 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcmmx\" (UniqueName: \"kubernetes.io/projected/3ce82842-e359-4824-abb2-6c652caf36ca-kube-api-access-gcmmx\") pod \"3ce82842-e359-4824-abb2-6c652caf36ca\" (UID: \"3ce82842-e359-4824-abb2-6c652caf36ca\") "
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:57.999983 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ce82842-e359-4824-abb2-6c652caf36ca-logs" (OuterVolumeSpecName: "logs") pod "3ce82842-e359-4824-abb2-6c652caf36ca" (UID: "3ce82842-e359-4824-abb2-6c652caf36ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.000202 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ce82842-e359-4824-abb2-6c652caf36ca-logs\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.009465 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3ce82842-e359-4824-abb2-6c652caf36ca" (UID: "3ce82842-e359-4824-abb2-6c652caf36ca"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.010095 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce82842-e359-4824-abb2-6c652caf36ca-kube-api-access-gcmmx" (OuterVolumeSpecName: "kube-api-access-gcmmx") pod "3ce82842-e359-4824-abb2-6c652caf36ca" (UID: "3ce82842-e359-4824-abb2-6c652caf36ca"). InnerVolumeSpecName "kube-api-access-gcmmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.028213 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ce82842-e359-4824-abb2-6c652caf36ca" (UID: "3ce82842-e359-4824-abb2-6c652caf36ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.030506 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-scripts" (OuterVolumeSpecName: "scripts") pod "3ce82842-e359-4824-abb2-6c652caf36ca" (UID: "3ce82842-e359-4824-abb2-6c652caf36ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.033685 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-config-data" (OuterVolumeSpecName: "config-data") pod "3ce82842-e359-4824-abb2-6c652caf36ca" (UID: "3ce82842-e359-4824-abb2-6c652caf36ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.060680 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3ce82842-e359-4824-abb2-6c652caf36ca" (UID: "3ce82842-e359-4824-abb2-6c652caf36ca"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.101201 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.101237 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ce82842-e359-4824-abb2-6c652caf36ca-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.101248 4751 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.101262 4751 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.101275 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce82842-e359-4824-abb2-6c652caf36ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.101284 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcmmx\" (UniqueName: \"kubernetes.io/projected/3ce82842-e359-4824-abb2-6c652caf36ca-kube-api-access-gcmmx\") on node \"crc\" DevicePath \"\""
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.534626 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.534861 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerName="glance-log" containerID="cri-o://6ec27529143d8d4189383d1669d93d38347901c6fcb8b737b7b64d8a08279512" gracePeriod=30
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.534970 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerName="glance-httpd" containerID="cri-o://3fd994d1a4d18b2d5cca8f908653e334be8ba7c78ce97f28bc3e17d2c2c40c7a" gracePeriod=30
Nov 23 04:12:58 crc kubenswrapper[4751]: E1123 04:12:58.603773 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff26108b_8bb2_4135_acbd_49bdd6fb9940.slice/crio-conmon-6ec27529143d8d4189383d1669d93d38347901c6fcb8b737b7b64d8a08279512.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.709572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerStarted","Data":"d9023461fea54c9fbeac1c5b5d5d1b23fc656b57c9e16117903ed72186d512b7"}
event={"ID":"3ce82842-e359-4824-abb2-6c652caf36ca","Type":"ContainerDied","Data":"e4a836d9b29795ac09ee427deff51ed5333fdb46195035d2d9aa87de9308326a"} Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.713133 4751 scope.go:117] "RemoveContainer" containerID="9479c96ecd501a93127a0cba95158d6fc07b3db5c33697a330901fef10a6c446" Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.713294 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bc45b664d-wh6ld" Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.722202 4751 generic.go:334] "Generic (PLEG): container finished" podID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerID="6ec27529143d8d4189383d1669d93d38347901c6fcb8b737b7b64d8a08279512" exitCode=143 Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.722257 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff26108b-8bb2-4135-acbd-49bdd6fb9940","Type":"ContainerDied","Data":"6ec27529143d8d4189383d1669d93d38347901c6fcb8b737b7b64d8a08279512"} Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.743217 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bc45b664d-wh6ld"] Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.751582 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bc45b664d-wh6ld"] Nov 23 04:12:58 crc kubenswrapper[4751]: I1123 04:12:58.941559 4751 scope.go:117] "RemoveContainer" containerID="7e524121e462318f5ee78e48f212c7932519de6fda8d0e8b7d02daab42da34b7" Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.214503 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.214788 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerName="glance-log" containerID="cri-o://68a049f1251d1fb6d133440d77c5d2354bb40477c13fb49e64759fea660944c0" gracePeriod=30 Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.214890 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerName="glance-httpd" containerID="cri-o://9102cb5ee21962a7721f1fda43287ab7696cf30ee6bcf956226b218e9199c8f4" gracePeriod=30 Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.736625 4751 generic.go:334] "Generic (PLEG): container finished" podID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerID="68a049f1251d1fb6d133440d77c5d2354bb40477c13fb49e64759fea660944c0" exitCode=143 Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.736715 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49606943-d83e-4d27-9e18-88ae5a000b6c","Type":"ContainerDied","Data":"68a049f1251d1fb6d133440d77c5d2354bb40477c13fb49e64759fea660944c0"} Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.739423 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerStarted","Data":"50f11ef90b37dfe8f14d32269d395264b59592abc8355fd0b715de682bd7c3a4"} Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.739590 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" 
containerName="ceilometer-central-agent" containerID="cri-o://195b151c5aa696889f3e769f98cd9ad3a11888fbd70724070ad788cd41955721" gracePeriod=30 Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.739848 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.739852 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="sg-core" containerID="cri-o://d9023461fea54c9fbeac1c5b5d5d1b23fc656b57c9e16117903ed72186d512b7" gracePeriod=30 Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.739890 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="ceilometer-notification-agent" containerID="cri-o://73c422c1c8333e650173b7cf879fa479ac91e87b643e1c9fdab81b0995d459ca" gracePeriod=30 Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.739884 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="proxy-httpd" containerID="cri-o://50f11ef90b37dfe8f14d32269d395264b59592abc8355fd0b715de682bd7c3a4" gracePeriod=30 Nov 23 04:12:59 crc kubenswrapper[4751]: I1123 04:12:59.760190 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.897732907 podStartE2EDuration="6.760174824s" podCreationTimestamp="2025-11-23 04:12:53 +0000 UTC" firstStartedPulling="2025-11-23 04:12:54.58298625 +0000 UTC m=+1070.776657609" lastFinishedPulling="2025-11-23 04:12:59.445428167 +0000 UTC m=+1075.639099526" observedRunningTime="2025-11-23 04:12:59.7584908 +0000 UTC m=+1075.952162169" watchObservedRunningTime="2025-11-23 04:12:59.760174824 +0000 UTC m=+1075.953846183" Nov 23 04:13:00 crc kubenswrapper[4751]: I1123 04:13:00.655451 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" path="/var/lib/kubelet/pods/3ce82842-e359-4824-abb2-6c652caf36ca/volumes" Nov 23 04:13:00 crc kubenswrapper[4751]: I1123 04:13:00.753183 4751 generic.go:334] "Generic (PLEG): container finished" podID="30a4aaad-405d-4536-9844-e0c597f6467a" containerID="d9023461fea54c9fbeac1c5b5d5d1b23fc656b57c9e16117903ed72186d512b7" exitCode=2 Nov 23 04:13:00 crc kubenswrapper[4751]: I1123 04:13:00.753224 4751 generic.go:334] "Generic (PLEG): container finished" podID="30a4aaad-405d-4536-9844-e0c597f6467a" containerID="73c422c1c8333e650173b7cf879fa479ac91e87b643e1c9fdab81b0995d459ca" exitCode=0 Nov 23 04:13:00 crc kubenswrapper[4751]: I1123 04:13:00.753247 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerDied","Data":"d9023461fea54c9fbeac1c5b5d5d1b23fc656b57c9e16117903ed72186d512b7"} Nov 23 04:13:00 crc kubenswrapper[4751]: I1123 04:13:00.753274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerDied","Data":"73c422c1c8333e650173b7cf879fa479ac91e87b643e1c9fdab81b0995d459ca"} Nov 23 04:13:01 crc kubenswrapper[4751]: I1123 04:13:01.774772 4751 generic.go:334] "Generic (PLEG): container finished" podID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerID="3fd994d1a4d18b2d5cca8f908653e334be8ba7c78ce97f28bc3e17d2c2c40c7a" 
Nov 23 04:13:01 crc kubenswrapper[4751]: I1123 04:13:01.774824 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff26108b-8bb2-4135-acbd-49bdd6fb9940","Type":"ContainerDied","Data":"3fd994d1a4d18b2d5cca8f908653e334be8ba7c78ce97f28bc3e17d2c2c40c7a"}
Nov 23 04:13:02 crc kubenswrapper[4751]: I1123 04:13:02.790194 4751 generic.go:334] "Generic (PLEG): container finished" podID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerID="9102cb5ee21962a7721f1fda43287ab7696cf30ee6bcf956226b218e9199c8f4" exitCode=0
Nov 23 04:13:02 crc kubenswrapper[4751]: I1123 04:13:02.790253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49606943-d83e-4d27-9e18-88ae5a000b6c","Type":"ContainerDied","Data":"9102cb5ee21962a7721f1fda43287ab7696cf30ee6bcf956226b218e9199c8f4"}
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.434905 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.539413 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.559851 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-logs\") pod \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") "
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.559964 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-public-tls-certs\") pod \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") "
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.560062 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb5nf\" (UniqueName: \"kubernetes.io/projected/ff26108b-8bb2-4135-acbd-49bdd6fb9940-kube-api-access-wb5nf\") pod \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") "
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.560096 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-config-data\") pod \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") "
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.560144 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-combined-ca-bundle\") pod \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") "
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.560240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-scripts\") pod \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") "
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.560313 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") "
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.560369 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-httpd-run\") pod \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\" (UID: \"ff26108b-8bb2-4135-acbd-49bdd6fb9940\") "
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.561282 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-logs" (OuterVolumeSpecName: "logs") pod "ff26108b-8bb2-4135-acbd-49bdd6fb9940" (UID: "ff26108b-8bb2-4135-acbd-49bdd6fb9940"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.561771 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ff26108b-8bb2-4135-acbd-49bdd6fb9940" (UID: "ff26108b-8bb2-4135-acbd-49bdd6fb9940"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.568889 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "ff26108b-8bb2-4135-acbd-49bdd6fb9940" (UID: "ff26108b-8bb2-4135-acbd-49bdd6fb9940"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.570854 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-scripts" (OuterVolumeSpecName: "scripts") pod "ff26108b-8bb2-4135-acbd-49bdd6fb9940" (UID: "ff26108b-8bb2-4135-acbd-49bdd6fb9940"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.578333 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff26108b-8bb2-4135-acbd-49bdd6fb9940-kube-api-access-wb5nf" (OuterVolumeSpecName: "kube-api-access-wb5nf") pod "ff26108b-8bb2-4135-acbd-49bdd6fb9940" (UID: "ff26108b-8bb2-4135-acbd-49bdd6fb9940"). InnerVolumeSpecName "kube-api-access-wb5nf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.611185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff26108b-8bb2-4135-acbd-49bdd6fb9940" (UID: "ff26108b-8bb2-4135-acbd-49bdd6fb9940"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.661497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-httpd-run\") pod \"49606943-d83e-4d27-9e18-88ae5a000b6c\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.661534 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-scripts\") pod \"49606943-d83e-4d27-9e18-88ae5a000b6c\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.661560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nlb5\" (UniqueName: \"kubernetes.io/projected/49606943-d83e-4d27-9e18-88ae5a000b6c-kube-api-access-4nlb5\") pod \"49606943-d83e-4d27-9e18-88ae5a000b6c\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.661597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-combined-ca-bundle\") pod \"49606943-d83e-4d27-9e18-88ae5a000b6c\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.661687 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"49606943-d83e-4d27-9e18-88ae5a000b6c\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.661763 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-config-data\") pod \"49606943-d83e-4d27-9e18-88ae5a000b6c\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.661779 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-internal-tls-certs\") pod \"49606943-d83e-4d27-9e18-88ae5a000b6c\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.661803 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-logs\") pod \"49606943-d83e-4d27-9e18-88ae5a000b6c\" (UID: \"49606943-d83e-4d27-9e18-88ae5a000b6c\") " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.662129 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb5nf\" (UniqueName: \"kubernetes.io/projected/ff26108b-8bb2-4135-acbd-49bdd6fb9940-kube-api-access-wb5nf\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.662139 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.662148 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.662164 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.662173 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.662181 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff26108b-8bb2-4135-acbd-49bdd6fb9940-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.662606 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "49606943-d83e-4d27-9e18-88ae5a000b6c" (UID: "49606943-d83e-4d27-9e18-88ae5a000b6c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.666791 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-logs" (OuterVolumeSpecName: "logs") pod "49606943-d83e-4d27-9e18-88ae5a000b6c" (UID: "49606943-d83e-4d27-9e18-88ae5a000b6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.669591 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49606943-d83e-4d27-9e18-88ae5a000b6c-kube-api-access-4nlb5" (OuterVolumeSpecName: "kube-api-access-4nlb5") pod "49606943-d83e-4d27-9e18-88ae5a000b6c" (UID: "49606943-d83e-4d27-9e18-88ae5a000b6c"). InnerVolumeSpecName "kube-api-access-4nlb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.669683 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-scripts" (OuterVolumeSpecName: "scripts") pod "49606943-d83e-4d27-9e18-88ae5a000b6c" (UID: "49606943-d83e-4d27-9e18-88ae5a000b6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.669699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "49606943-d83e-4d27-9e18-88ae5a000b6c" (UID: "49606943-d83e-4d27-9e18-88ae5a000b6c"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.673872 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ff26108b-8bb2-4135-acbd-49bdd6fb9940" (UID: "ff26108b-8bb2-4135-acbd-49bdd6fb9940"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.706715 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.708533 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49606943-d83e-4d27-9e18-88ae5a000b6c" (UID: "49606943-d83e-4d27-9e18-88ae5a000b6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.723391 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-config-data" (OuterVolumeSpecName: "config-data") pod "ff26108b-8bb2-4135-acbd-49bdd6fb9940" (UID: "ff26108b-8bb2-4135-acbd-49bdd6fb9940"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.726253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-config-data" (OuterVolumeSpecName: "config-data") pod "49606943-d83e-4d27-9e18-88ae5a000b6c" (UID: "49606943-d83e-4d27-9e18-88ae5a000b6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.759592 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "49606943-d83e-4d27-9e18-88ae5a000b6c" (UID: "49606943-d83e-4d27-9e18-88ae5a000b6c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763564 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763598 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763612 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763620 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763628 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49606943-d83e-4d27-9e18-88ae5a000b6c-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763638 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763647 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nlb5\" (UniqueName: \"kubernetes.io/projected/49606943-d83e-4d27-9e18-88ae5a000b6c-kube-api-access-4nlb5\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763657 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763668 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49606943-d83e-4d27-9e18-88ae5a000b6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763681 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26108b-8bb2-4135-acbd-49bdd6fb9940-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.763703 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.788662 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.823737 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff26108b-8bb2-4135-acbd-49bdd6fb9940","Type":"ContainerDied","Data":"2bb69210774a9b4297b16a3e21e54a8b7cfde957ccaf528736003e241e277fd9"} Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.823785 4751 scope.go:117] 
"RemoveContainer" containerID="3fd994d1a4d18b2d5cca8f908653e334be8ba7c78ce97f28bc3e17d2c2c40c7a" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.823837 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.826498 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49606943-d83e-4d27-9e18-88ae5a000b6c","Type":"ContainerDied","Data":"89e2f7ba4f87f5aaad3af50dc0c85cbc89ec8f5d3cd8c0ea76ab105f426919aa"} Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.826587 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.833164 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2l582" event={"ID":"94895060-a23e-4768-b800-3ca2557264fd","Type":"ContainerStarted","Data":"f9f18f775011501d6074511c7fcad3e5203a2418ea916e77f1cdcaf2e00a8fea"} Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.849604 4751 scope.go:117] "RemoveContainer" containerID="6ec27529143d8d4189383d1669d93d38347901c6fcb8b737b7b64d8a08279512" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.860142 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2l582" podStartSLOduration=2.506006809 podStartE2EDuration="10.86012394s" podCreationTimestamp="2025-11-23 04:12:55 +0000 UTC" firstStartedPulling="2025-11-23 04:12:56.729572292 +0000 UTC m=+1072.923243651" lastFinishedPulling="2025-11-23 04:13:05.083689423 +0000 UTC m=+1081.277360782" observedRunningTime="2025-11-23 04:13:05.849936811 +0000 UTC m=+1082.043608190" watchObservedRunningTime="2025-11-23 04:13:05.86012394 +0000 UTC m=+1082.053795299" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.867553 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.876153 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.904483 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.909812 4751 scope.go:117] "RemoveContainer" containerID="9102cb5ee21962a7721f1fda43287ab7696cf30ee6bcf956226b218e9199c8f4" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.916040 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.931785 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:13:05 crc kubenswrapper[4751]: E1123 04:13:05.932186 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerName="glance-httpd" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.932197 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerName="glance-httpd" Nov 23 04:13:05 crc kubenswrapper[4751]: E1123 04:13:05.932211 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerName="glance-httpd" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.932217 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerName="glance-httpd" Nov 23 04:13:05 crc kubenswrapper[4751]: E1123 04:13:05.932230 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerName="glance-log" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.932238 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerName="glance-log" Nov 23 04:13:05 crc kubenswrapper[4751]: E1123 04:13:05.932255 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.932261 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon" Nov 23 04:13:05 crc kubenswrapper[4751]: E1123 04:13:05.932276 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon-log" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.932281 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon-log" Nov 23 04:13:05 crc kubenswrapper[4751]: E1123 04:13:05.932294 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerName="glance-log" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.932300 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerName="glance-log" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.934544 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.934578 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerName="glance-log" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.934591 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" containerName="glance-httpd" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.934603 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce82842-e359-4824-abb2-6c652caf36ca" containerName="horizon-log" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.934613 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerName="glance-httpd" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.934625 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" containerName="glance-log" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.937244 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.939156 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.941063 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.941635 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hp66g" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.941748 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.943938 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.948904 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.949809 4751 scope.go:117] "RemoveContainer" containerID="68a049f1251d1fb6d133440d77c5d2354bb40477c13fb49e64759fea660944c0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.956832 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.958480 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.962593 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.965806 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.969465 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ebe447-bb04-442d-9fdf-752c1dd5a747-logs\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.969523 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clfdd\" (UniqueName: \"kubernetes.io/projected/06ebe447-bb04-442d-9fdf-752c1dd5a747-kube-api-access-clfdd\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.969546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-config-data\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.969603 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " 
pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.969640 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.969678 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.969766 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06ebe447-bb04-442d-9fdf-752c1dd5a747-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.969792 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-scripts\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:05 crc kubenswrapper[4751]: I1123 04:13:05.975909 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.071453 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.071789 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.071811 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.071834 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.071852 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.071859 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.071868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.072296 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.072415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06ebe447-bb04-442d-9fdf-752c1dd5a747-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.072524 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-scripts\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.072662 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.072788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-logs\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.073022 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ebe447-bb04-442d-9fdf-752c1dd5a747-logs\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.073124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg78n\" 
(UniqueName: \"kubernetes.io/projected/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-kube-api-access-xg78n\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.073225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clfdd\" (UniqueName: \"kubernetes.io/projected/06ebe447-bb04-442d-9fdf-752c1dd5a747-kube-api-access-clfdd\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.073295 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-config-data\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.073428 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.074583 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/06ebe447-bb04-442d-9fdf-752c1dd5a747-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.076195 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06ebe447-bb04-442d-9fdf-752c1dd5a747-logs\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.076805 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.077268 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.077912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-scripts\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.092155 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clfdd\" (UniqueName: 
\"kubernetes.io/projected/06ebe447-bb04-442d-9fdf-752c1dd5a747-kube-api-access-clfdd\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.095273 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ebe447-bb04-442d-9fdf-752c1dd5a747-config-data\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.095878 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"06ebe447-bb04-442d-9fdf-752c1dd5a747\") " pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.175426 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.175516 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.175553 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-logs\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.175656 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg78n\" (UniqueName: \"kubernetes.io/projected/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-kube-api-access-xg78n\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.175718 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.175768 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.175785 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.175803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.176301 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.176446 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.176491 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-logs\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.179576 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.180028 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.184255 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.188592 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.192746 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg78n\" (UniqueName: \"kubernetes.io/projected/3c09a9e4-3f1b-4732-9b6b-fcd54fe21650-kube-api-access-xg78n\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 
04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.217586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650\") " pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.271945 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.361826 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.666743 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49606943-d83e-4d27-9e18-88ae5a000b6c" path="/var/lib/kubelet/pods/49606943-d83e-4d27-9e18-88ae5a000b6c/volumes" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.667430 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff26108b-8bb2-4135-acbd-49bdd6fb9940" path="/var/lib/kubelet/pods/ff26108b-8bb2-4135-acbd-49bdd6fb9940/volumes" Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.845181 4751 generic.go:334] "Generic (PLEG): container finished" podID="30a4aaad-405d-4536-9844-e0c597f6467a" containerID="195b151c5aa696889f3e769f98cd9ad3a11888fbd70724070ad788cd41955721" exitCode=0 Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.846025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerDied","Data":"195b151c5aa696889f3e769f98cd9ad3a11888fbd70724070ad788cd41955721"} Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.859784 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 04:13:06 crc kubenswrapper[4751]: W1123 04:13:06.862441 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ebe447_bb04_442d_9fdf_752c1dd5a747.slice/crio-d8dd870566cfc264d4c417bee185a5600bc774cbdc4335fde87a744fb121068b WatchSource:0}: Error finding container d8dd870566cfc264d4c417bee185a5600bc774cbdc4335fde87a744fb121068b: Status 404 returned error can't find the container with id d8dd870566cfc264d4c417bee185a5600bc774cbdc4335fde87a744fb121068b Nov 23 04:13:06 crc kubenswrapper[4751]: I1123 04:13:06.975397 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 04:13:06 crc kubenswrapper[4751]: W1123 04:13:06.988959 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c09a9e4_3f1b_4732_9b6b_fcd54fe21650.slice/crio-2e5155a4d983e38b4696c09a0e9f5a860134bae957c277e9e4ea6cc66abaf9ea WatchSource:0}: Error finding container 2e5155a4d983e38b4696c09a0e9f5a860134bae957c277e9e4ea6cc66abaf9ea: Status 404 returned error can't find the container with id 2e5155a4d983e38b4696c09a0e9f5a860134bae957c277e9e4ea6cc66abaf9ea Nov 23 04:13:07 crc kubenswrapper[4751]: I1123 04:13:07.857148 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"06ebe447-bb04-442d-9fdf-752c1dd5a747","Type":"ContainerStarted","Data":"4899b296cd8248668cad0e311127b13cd9d80911cd449425f1377ad3a377925a"} Nov 23 04:13:07 crc kubenswrapper[4751]: I1123 
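[Editor's note] The two "Failed to process watch event" warnings above are a benign race: cadvisor sees the new crio cgroup before the container is queryable, and the systemd slice name embeds the pod UID with dashes escaped as underscores. A hedged Go sketch recovering the pod UID and container ID from such a path (an ad-hoc parser, not a kubelet API):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matches paths like the watch-event names above:
//   .../kubepods-besteffort-pod<uid with _>.slice/crio-<64-hex>
var cgroupRe = regexp.MustCompile(`kubepods-[a-z]+-pod([0-9a-f_]+)\.slice/crio-([0-9a-f]{64})`)

func main() {
	p := "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ebe447_bb04_442d_9fdf_752c1dd5a747.slice/crio-d8dd870566cfc264d4c417bee185a5600bc774cbdc4335fde87a744fb121068b"
	if m := cgroupRe.FindStringSubmatch(p); m != nil {
		uid := strings.ReplaceAll(m[1], "_", "-") // undo the systemd-style escaping
		fmt.Println(uid, m[2][:12])               // 06ebe447-bb04-442d-9fdf-752c1dd5a747 d8dd870566cf
	}
}
```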
Nov 23 04:13:07 crc kubenswrapper[4751]: I1123 04:13:07.861525 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650","Type":"ContainerStarted","Data":"0f0b00884456736d98b5d6eb3a425bcbf81ac2fb4435471122abc1a5f0931247"}
Nov 23 04:13:07 crc kubenswrapper[4751]: I1123 04:13:07.861595 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650","Type":"ContainerStarted","Data":"2e5155a4d983e38b4696c09a0e9f5a860134bae957c277e9e4ea6cc66abaf9ea"}
Nov 23 04:13:08 crc kubenswrapper[4751]: I1123 04:13:08.117175 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 04:13:08 crc kubenswrapper[4751]: I1123 04:13:08.117228 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 04:13:08 crc kubenswrapper[4751]: I1123 04:13:08.871333 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"06ebe447-bb04-442d-9fdf-752c1dd5a747","Type":"ContainerStarted","Data":"f7d6409cd10893995e16198bd010ec99ab6274550855f1aadd66685b086911cf"}
Nov 23 04:13:08 crc kubenswrapper[4751]: I1123 04:13:08.875297 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3c09a9e4-3f1b-4732-9b6b-fcd54fe21650","Type":"ContainerStarted","Data":"150ab67d1c0e1d5b14fe7020941d4aab8fa962dfeb5ad6259bc574e5a3c07acd"}
Nov 23 04:13:08 crc kubenswrapper[4751]: I1123 04:13:08.905545 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.905526212 podStartE2EDuration="3.905526212s" podCreationTimestamp="2025-11-23 04:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:08.896784752 +0000 UTC m=+1085.090456121" watchObservedRunningTime="2025-11-23 04:13:08.905526212 +0000 UTC m=+1085.099197571"
Nov 23 04:13:08 crc kubenswrapper[4751]: I1123 04:13:08.932759 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.932736232 podStartE2EDuration="3.932736232s" podCreationTimestamp="2025-11-23 04:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:08.923251341 +0000 UTC m=+1085.116922700" watchObservedRunningTime="2025-11-23 04:13:08.932736232 +0000 UTC m=+1085.126407591"
Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.272982 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
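[Editor's note] The machine-config-daemon liveness failure above is just an HTTP GET to 127.0.0.1:8798/health that got connection refused. A minimal Go sketch of the same probe shape, with the 200-399 success range kubelet uses for HTTP probes; illustrative only, not kubelet's prober:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP-GET health check with a timeout.
func probe(ctx context.Context, url string) error {
	ctx, cancel := context.WithTimeout(ctx, time.Second)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // e.g. "connect: connection refused", as logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe(context.Background(), "http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```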
status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.273716 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.315913 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.318749 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.363243 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.363308 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.401368 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.410631 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.967381 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.967437 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.967448 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 04:13:16 crc kubenswrapper[4751]: I1123 04:13:16.967456 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:18 crc kubenswrapper[4751]: I1123 04:13:18.847046 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 04:13:18 crc kubenswrapper[4751]: I1123 04:13:18.985899 4751 generic.go:334] "Generic (PLEG): container finished" podID="94895060-a23e-4768-b800-3ca2557264fd" containerID="f9f18f775011501d6074511c7fcad3e5203a2418ea916e77f1cdcaf2e00a8fea" exitCode=0 Nov 23 04:13:18 crc kubenswrapper[4751]: I1123 04:13:18.985976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2l582" event={"ID":"94895060-a23e-4768-b800-3ca2557264fd","Type":"ContainerDied","Data":"f9f18f775011501d6074511c7fcad3e5203a2418ea916e77f1cdcaf2e00a8fea"} Nov 23 04:13:18 crc kubenswrapper[4751]: I1123 04:13:18.986004 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 04:13:18 crc kubenswrapper[4751]: I1123 04:13:18.995393 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 04:13:19 crc kubenswrapper[4751]: I1123 04:13:19.009167 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:19 crc kubenswrapper[4751]: I1123 04:13:19.009535 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 04:13:19 crc kubenswrapper[4751]: I1123 04:13:19.230433 4751 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.355542 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2l582" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.465898 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-combined-ca-bundle\") pod \"94895060-a23e-4768-b800-3ca2557264fd\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.465958 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2cw6\" (UniqueName: \"kubernetes.io/projected/94895060-a23e-4768-b800-3ca2557264fd-kube-api-access-z2cw6\") pod \"94895060-a23e-4768-b800-3ca2557264fd\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.466031 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-config-data\") pod \"94895060-a23e-4768-b800-3ca2557264fd\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.466077 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-scripts\") pod \"94895060-a23e-4768-b800-3ca2557264fd\" (UID: \"94895060-a23e-4768-b800-3ca2557264fd\") " Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.473575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-scripts" (OuterVolumeSpecName: "scripts") pod "94895060-a23e-4768-b800-3ca2557264fd" (UID: "94895060-a23e-4768-b800-3ca2557264fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.473703 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94895060-a23e-4768-b800-3ca2557264fd-kube-api-access-z2cw6" (OuterVolumeSpecName: "kube-api-access-z2cw6") pod "94895060-a23e-4768-b800-3ca2557264fd" (UID: "94895060-a23e-4768-b800-3ca2557264fd"). InnerVolumeSpecName "kube-api-access-z2cw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.502980 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94895060-a23e-4768-b800-3ca2557264fd" (UID: "94895060-a23e-4768-b800-3ca2557264fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.514313 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-config-data" (OuterVolumeSpecName: "config-data") pod "94895060-a23e-4768-b800-3ca2557264fd" (UID: "94895060-a23e-4768-b800-3ca2557264fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.568317 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.568672 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2cw6\" (UniqueName: \"kubernetes.io/projected/94895060-a23e-4768-b800-3ca2557264fd-kube-api-access-z2cw6\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.568686 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:20 crc kubenswrapper[4751]: I1123 04:13:20.568698 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94895060-a23e-4768-b800-3ca2557264fd-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.006603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2l582" event={"ID":"94895060-a23e-4768-b800-3ca2557264fd","Type":"ContainerDied","Data":"55322e47234da1f32b6063026768807497f639bfefc088215f237a2e2e1304de"} Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.006638 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55322e47234da1f32b6063026768807497f639bfefc088215f237a2e2e1304de" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.006685 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2l582" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.098175 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 04:13:21 crc kubenswrapper[4751]: E1123 04:13:21.098684 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94895060-a23e-4768-b800-3ca2557264fd" containerName="nova-cell0-conductor-db-sync" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.098706 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="94895060-a23e-4768-b800-3ca2557264fd" containerName="nova-cell0-conductor-db-sync" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.098983 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="94895060-a23e-4768-b800-3ca2557264fd" containerName="nova-cell0-conductor-db-sync" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.099704 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.103320 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.103620 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-pr24r" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.123057 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.177476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1085be93-49b1-4d78-818d-ef37248136f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.177547 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6z9p\" (UniqueName: \"kubernetes.io/projected/1085be93-49b1-4d78-818d-ef37248136f4-kube-api-access-n6z9p\") pod \"nova-cell0-conductor-0\" (UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.177600 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1085be93-49b1-4d78-818d-ef37248136f4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.279696 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1085be93-49b1-4d78-818d-ef37248136f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.279777 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6z9p\" (UniqueName: \"kubernetes.io/projected/1085be93-49b1-4d78-818d-ef37248136f4-kube-api-access-n6z9p\") pod \"nova-cell0-conductor-0\" (UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.279846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1085be93-49b1-4d78-818d-ef37248136f4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.284280 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1085be93-49b1-4d78-818d-ef37248136f4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.284447 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1085be93-49b1-4d78-818d-ef37248136f4-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.300305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6z9p\" (UniqueName: \"kubernetes.io/projected/1085be93-49b1-4d78-818d-ef37248136f4-kube-api-access-n6z9p\") pod \"nova-cell0-conductor-0\" (UID: \"1085be93-49b1-4d78-818d-ef37248136f4\") " pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.426779 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:21 crc kubenswrapper[4751]: I1123 04:13:21.883639 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 04:13:22 crc kubenswrapper[4751]: I1123 04:13:22.039044 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1085be93-49b1-4d78-818d-ef37248136f4","Type":"ContainerStarted","Data":"c1c401c7804ecd93eb4d2471f425ea557bb0e401a8380bce0ff1878d287e5bd0"} Nov 23 04:13:23 crc kubenswrapper[4751]: I1123 04:13:23.053404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1085be93-49b1-4d78-818d-ef37248136f4","Type":"ContainerStarted","Data":"8d7fa15cab508991f0fbb6041c3e181e95855fce2800828df24b245d624ff08f"} Nov 23 04:13:23 crc kubenswrapper[4751]: I1123 04:13:23.054866 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:24 crc kubenswrapper[4751]: I1123 04:13:24.078934 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.139030 4751 generic.go:334] "Generic (PLEG): container finished" podID="30a4aaad-405d-4536-9844-e0c597f6467a" containerID="50f11ef90b37dfe8f14d32269d395264b59592abc8355fd0b715de682bd7c3a4" exitCode=137 Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.139082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerDied","Data":"50f11ef90b37dfe8f14d32269d395264b59592abc8355fd0b715de682bd7c3a4"} Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.139557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30a4aaad-405d-4536-9844-e0c597f6467a","Type":"ContainerDied","Data":"65be85eb6c9faa17645d6d69dd7bebfedb7c3aba7ee51beca0258e4d46d546b3"} Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.139584 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65be85eb6c9faa17645d6d69dd7bebfedb7c3aba7ee51beca0258e4d46d546b3" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.143578 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.168221 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=9.168201742 podStartE2EDuration="9.168201742s" podCreationTimestamp="2025-11-23 04:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:23.079112827 +0000 UTC m=+1099.272784196" watchObservedRunningTime="2025-11-23 04:13:30.168201742 +0000 UTC m=+1106.361873101" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.249017 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-sg-core-conf-yaml\") pod \"30a4aaad-405d-4536-9844-e0c597f6467a\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.249842 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddwjv\" (UniqueName: \"kubernetes.io/projected/30a4aaad-405d-4536-9844-e0c597f6467a-kube-api-access-ddwjv\") pod \"30a4aaad-405d-4536-9844-e0c597f6467a\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.249877 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-run-httpd\") pod \"30a4aaad-405d-4536-9844-e0c597f6467a\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.249912 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-scripts\") pod \"30a4aaad-405d-4536-9844-e0c597f6467a\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.249990 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-combined-ca-bundle\") pod \"30a4aaad-405d-4536-9844-e0c597f6467a\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.250047 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-log-httpd\") pod \"30a4aaad-405d-4536-9844-e0c597f6467a\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.250117 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-config-data\") pod \"30a4aaad-405d-4536-9844-e0c597f6467a\" (UID: \"30a4aaad-405d-4536-9844-e0c597f6467a\") " Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.252531 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30a4aaad-405d-4536-9844-e0c597f6467a" (UID: "30a4aaad-405d-4536-9844-e0c597f6467a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.252775 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30a4aaad-405d-4536-9844-e0c597f6467a" (UID: "30a4aaad-405d-4536-9844-e0c597f6467a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.258786 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-scripts" (OuterVolumeSpecName: "scripts") pod "30a4aaad-405d-4536-9844-e0c597f6467a" (UID: "30a4aaad-405d-4536-9844-e0c597f6467a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.261143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a4aaad-405d-4536-9844-e0c597f6467a-kube-api-access-ddwjv" (OuterVolumeSpecName: "kube-api-access-ddwjv") pod "30a4aaad-405d-4536-9844-e0c597f6467a" (UID: "30a4aaad-405d-4536-9844-e0c597f6467a"). InnerVolumeSpecName "kube-api-access-ddwjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.297763 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30a4aaad-405d-4536-9844-e0c597f6467a" (UID: "30a4aaad-405d-4536-9844-e0c597f6467a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.352287 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.352316 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.352327 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddwjv\" (UniqueName: \"kubernetes.io/projected/30a4aaad-405d-4536-9844-e0c597f6467a-kube-api-access-ddwjv\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.352337 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30a4aaad-405d-4536-9844-e0c597f6467a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.352359 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.355250 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-config-data" (OuterVolumeSpecName: "config-data") pod "30a4aaad-405d-4536-9844-e0c597f6467a" (UID: "30a4aaad-405d-4536-9844-e0c597f6467a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.357922 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30a4aaad-405d-4536-9844-e0c597f6467a" (UID: "30a4aaad-405d-4536-9844-e0c597f6467a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.454092 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:30 crc kubenswrapper[4751]: I1123 04:13:30.454121 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a4aaad-405d-4536-9844-e0c597f6467a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.147671 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.167164 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.174469 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201051 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:13:31 crc kubenswrapper[4751]: E1123 04:13:31.201564 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="ceilometer-central-agent" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201582 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="ceilometer-central-agent" Nov 23 04:13:31 crc kubenswrapper[4751]: E1123 04:13:31.201600 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="ceilometer-notification-agent" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201608 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="ceilometer-notification-agent" Nov 23 04:13:31 crc kubenswrapper[4751]: E1123 04:13:31.201621 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="sg-core" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201627 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="sg-core" Nov 23 04:13:31 crc kubenswrapper[4751]: E1123 04:13:31.201640 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="proxy-httpd" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201645 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="proxy-httpd" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201815 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="ceilometer-central-agent" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201824 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="sg-core" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201839 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="ceilometer-notification-agent" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.201859 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" containerName="proxy-httpd" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.203432 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.205201 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.205908 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.217506 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.270058 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-config-data\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.270131 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-run-httpd\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.270182 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.270211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qsxs\" (UniqueName: \"kubernetes.io/projected/968acafd-d3ef-457c-853a-4dec707be397-kube-api-access-8qsxs\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.270454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.270544 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-scripts\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.270570 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-log-httpd\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.371502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.371556 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qsxs\" (UniqueName: \"kubernetes.io/projected/968acafd-d3ef-457c-853a-4dec707be397-kube-api-access-8qsxs\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.371642 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.371673 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-scripts\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.371693 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-log-httpd\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.371724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-config-data\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.371768 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-run-httpd\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.372408 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-run-httpd\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.372614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-log-httpd\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.381078 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.381776 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.384165 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-scripts\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.385063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-config-data\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.404112 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qsxs\" (UniqueName: \"kubernetes.io/projected/968acafd-d3ef-457c-853a-4dec707be397-kube-api-access-8qsxs\") pod \"ceilometer-0\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.469942 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.524509 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.980274 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2c7qw"] Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.981846 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.984110 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.984248 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.986972 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.987127 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-config-data\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.987275 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv4w8\" (UniqueName: \"kubernetes.io/projected/a210f151-b9cb-46c5-8493-0e6b9629b117-kube-api-access-jv4w8\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.987324 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-scripts\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:31 crc kubenswrapper[4751]: I1123 04:13:31.999411 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2c7qw"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.058584 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.088503 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-config-data\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.088595 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv4w8\" (UniqueName: \"kubernetes.io/projected/a210f151-b9cb-46c5-8493-0e6b9629b117-kube-api-access-jv4w8\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.088617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-scripts\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc 
kubenswrapper[4751]: I1123 04:13:32.088655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.100099 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-scripts\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.100129 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.100619 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-config-data\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.134183 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv4w8\" (UniqueName: \"kubernetes.io/projected/a210f151-b9cb-46c5-8493-0e6b9629b117-kube-api-access-jv4w8\") pod \"nova-cell0-cell-mapping-2c7qw\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.149661 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.152157 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.160490 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.168477 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerStarted","Data":"1ae8ed0752933db858fb90e36bc1479aaa7fef9cfcacc3ce6c85d4b5dbe3dd48"} Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.201662 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.239477 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.240973 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.244928 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.277093 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.293520 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.293557 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jwn\" (UniqueName: \"kubernetes.io/projected/04b4514d-f91c-4650-b1ef-9536c263a2f9-kube-api-access-c8jwn\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.293648 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-config-data\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.293711 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b4514d-f91c-4650-b1ef-9536c263a2f9-logs\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.309794 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.328726 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.329934 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.333338 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.371447 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.395841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phk9x\" (UniqueName: \"kubernetes.io/projected/53f8ac0f-8962-4005-adfd-98544abc7889-kube-api-access-phk9x\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.395920 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-config-data\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.395947 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.395963 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-config-data\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.396029 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b4514d-f91c-4650-b1ef-9536c263a2f9-logs\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.396047 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f8ac0f-8962-4005-adfd-98544abc7889-logs\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.396079 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.396639 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8jwn\" (UniqueName: \"kubernetes.io/projected/04b4514d-f91c-4650-b1ef-9536c263a2f9-kube-api-access-c8jwn\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.397377 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b4514d-f91c-4650-b1ef-9536c263a2f9-logs\") pod \"nova-api-0\" (UID: 
\"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.400008 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.400056 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.401354 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.403700 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.410313 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-config-data\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.445046 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-6xqsc"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.446577 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.455211 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jwn\" (UniqueName: \"kubernetes.io/projected/04b4514d-f91c-4650-b1ef-9536c263a2f9-kube-api-access-c8jwn\") pod \"nova-api-0\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.484413 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.494791 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-6xqsc"] Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498603 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-config-data\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgww4\" (UniqueName: \"kubernetes.io/projected/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-kube-api-access-kgww4\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498673 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498700 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnhnb\" (UniqueName: \"kubernetes.io/projected/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-kube-api-access-mnhnb\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498724 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f8ac0f-8962-4005-adfd-98544abc7889-logs\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498776 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498838 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phk9x\" (UniqueName: \"kubernetes.io/projected/53f8ac0f-8962-4005-adfd-98544abc7889-kube-api-access-phk9x\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.498857 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-config-data\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.502399 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f8ac0f-8962-4005-adfd-98544abc7889-logs\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.502958 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.506839 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.508046 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-config-data\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.524716 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phk9x\" (UniqueName: \"kubernetes.io/projected/53f8ac0f-8962-4005-adfd-98544abc7889-kube-api-access-phk9x\") pod \"nova-metadata-0\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.558587 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600286 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88szk\" (UniqueName: \"kubernetes.io/projected/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-kube-api-access-88szk\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600358 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgww4\" (UniqueName: \"kubernetes.io/projected/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-kube-api-access-kgww4\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600397 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600417 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnhnb\" (UniqueName: \"kubernetes.io/projected/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-kube-api-access-mnhnb\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600509 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600535 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-svc\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-config\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600596 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-config-data\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.600618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.605817 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.606488 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.607953 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.608502 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-config-data\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc 
kubenswrapper[4751]: I1123 04:13:32.638291 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnhnb\" (UniqueName: \"kubernetes.io/projected/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-kube-api-access-mnhnb\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.645573 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgww4\" (UniqueName: \"kubernetes.io/projected/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-kube-api-access-kgww4\") pod \"nova-scheduler-0\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.686744 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a4aaad-405d-4536-9844-e0c597f6467a" path="/var/lib/kubelet/pods/30a4aaad-405d-4536-9844-e0c597f6467a/volumes" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.701909 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.701984 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.702023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-svc\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.702070 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-config\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.702131 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.702194 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88szk\" (UniqueName: \"kubernetes.io/projected/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-kube-api-access-88szk\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.703280 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-svc\") pod 
\"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.704215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.704873 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.704875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.705560 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-config\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.716975 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.721197 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88szk\" (UniqueName: \"kubernetes.io/projected/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-kube-api-access-88szk\") pod \"dnsmasq-dns-bccf8f775-6xqsc\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.728619 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.778784 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:32 crc kubenswrapper[4751]: I1123 04:13:32.818373 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2c7qw"] Nov 23 04:13:32 crc kubenswrapper[4751]: W1123 04:13:32.848965 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda210f151_b9cb_46c5_8493_0e6b9629b117.slice/crio-2369b4ec4a1e21ebc37d2e849364eaae43074d2058139aa350cef5ceeb6c4721 WatchSource:0}: Error finding container 2369b4ec4a1e21ebc37d2e849364eaae43074d2058139aa350cef5ceeb6c4721: Status 404 returned error can't find the container with id 2369b4ec4a1e21ebc37d2e849364eaae43074d2058139aa350cef5ceeb6c4721 Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.101219 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.160912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.187659 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04b4514d-f91c-4650-b1ef-9536c263a2f9","Type":"ContainerStarted","Data":"0b34df10e8bf7ec5a3741268e0067fd2d247efefe1cca4d9e6cc6c92dfe8f1d8"} Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.202602 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2c7qw" event={"ID":"a210f151-b9cb-46c5-8493-0e6b9629b117","Type":"ContainerStarted","Data":"2369b4ec4a1e21ebc37d2e849364eaae43074d2058139aa350cef5ceeb6c4721"} Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.238165 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.272604 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.316280 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8gf78"] Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.318136 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.320752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.323852 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.338261 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8gf78"] Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.338910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-config-data\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.338992 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-scripts\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.339014 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.339037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89794\" (UniqueName: \"kubernetes.io/projected/f577aed3-fab8-4a7e-9beb-4cb6d472530b-kube-api-access-89794\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.419101 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-6xqsc"] Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.440775 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-scripts\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.440852 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.440881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89794\" (UniqueName: \"kubernetes.io/projected/f577aed3-fab8-4a7e-9beb-4cb6d472530b-kube-api-access-89794\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: 
\"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.441027 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-config-data\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.450771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-scripts\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.451685 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-config-data\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.455119 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.457951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89794\" (UniqueName: \"kubernetes.io/projected/f577aed3-fab8-4a7e-9beb-4cb6d472530b-kube-api-access-89794\") pod \"nova-cell1-conductor-db-sync-8gf78\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:33 crc kubenswrapper[4751]: I1123 04:13:33.649126 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.204450 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8gf78"] Nov 23 04:13:34 crc kubenswrapper[4751]: W1123 04:13:34.241873 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf577aed3_fab8_4a7e_9beb_4cb6d472530b.slice/crio-8e53e62c1978c234543bd13e0a28160b88eb4e333f99cd1cc38754885008ed3e WatchSource:0}: Error finding container 8e53e62c1978c234543bd13e0a28160b88eb4e333f99cd1cc38754885008ed3e: Status 404 returned error can't find the container with id 8e53e62c1978c234543bd13e0a28160b88eb4e333f99cd1cc38754885008ed3e Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.249472 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ab61fde-edb6-4a1d-8a8f-597bc03064f2","Type":"ContainerStarted","Data":"05b24ad5f981f864c7708a9a953b620757e4741440f68646299f7e6d8ec46ac9"} Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.251765 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2c7qw" event={"ID":"a210f151-b9cb-46c5-8493-0e6b9629b117","Type":"ContainerStarted","Data":"1cf5c8a9a3b6863f35db543172a8eb9110d1dc866fdb98158681e857f373a699"} Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.261505 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53f8ac0f-8962-4005-adfd-98544abc7889","Type":"ContainerStarted","Data":"e8e3dfec01bd593399dce4c7f313328a1da61086d603457c89507b0d0d0fd6d4"} Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.291371 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerStarted","Data":"57a058947e596dffbfa59aa761c4ccba6320eb93497f2ec7e146bd68f9fb9188"} Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.291947 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerStarted","Data":"225ddfaa6053f9b9c41c0be81e9d9497177c419bb2d8d4b152150a8d52d19111"} Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.296546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc","Type":"ContainerStarted","Data":"4e66e1dfbe1f2a2a6199dc9a41880a097fd5a4f083a383bc7f3081abd1f49072"} Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.301283 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2c7qw" podStartSLOduration=3.301270757 podStartE2EDuration="3.301270757s" podCreationTimestamp="2025-11-23 04:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:34.29189073 +0000 UTC m=+1110.485562089" watchObservedRunningTime="2025-11-23 04:13:34.301270757 +0000 UTC m=+1110.494942116" Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.311680 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerID="6d428b9413aece6d1558e48b34e81533cd6182ab760056614e53148661e46225" exitCode=0 Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.311727 4751 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" event={"ID":"eb43151b-50c2-46d9-8c0d-b9fe0573aa31","Type":"ContainerDied","Data":"6d428b9413aece6d1558e48b34e81533cd6182ab760056614e53148661e46225"} Nov 23 04:13:34 crc kubenswrapper[4751]: I1123 04:13:34.311752 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" event={"ID":"eb43151b-50c2-46d9-8c0d-b9fe0573aa31","Type":"ContainerStarted","Data":"1eb306af82112f0926ed1d36b4882e14a8e40273059110dcdf779731c5e7b803"} Nov 23 04:13:35 crc kubenswrapper[4751]: I1123 04:13:35.323678 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerStarted","Data":"1373e3b47122d74d50d20c7e031e1c4731a7c5e6437cdb6f5505e20f30e89f14"} Nov 23 04:13:35 crc kubenswrapper[4751]: I1123 04:13:35.326044 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8gf78" event={"ID":"f577aed3-fab8-4a7e-9beb-4cb6d472530b","Type":"ContainerStarted","Data":"b732c3b3b170f5158fc963a3d9b0a0fcbe44c10c1d2d7a2497b4e3208d74b91f"} Nov 23 04:13:35 crc kubenswrapper[4751]: I1123 04:13:35.326101 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8gf78" event={"ID":"f577aed3-fab8-4a7e-9beb-4cb6d472530b","Type":"ContainerStarted","Data":"8e53e62c1978c234543bd13e0a28160b88eb4e333f99cd1cc38754885008ed3e"} Nov 23 04:13:35 crc kubenswrapper[4751]: I1123 04:13:35.332954 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" event={"ID":"eb43151b-50c2-46d9-8c0d-b9fe0573aa31","Type":"ContainerStarted","Data":"e5b828d2434689aad86371b4d236350434d18b6f2987fd4086a6187cd6851426"} Nov 23 04:13:35 crc kubenswrapper[4751]: I1123 04:13:35.333106 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:35 crc kubenswrapper[4751]: I1123 04:13:35.371224 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-8gf78" podStartSLOduration=2.371200857 podStartE2EDuration="2.371200857s" podCreationTimestamp="2025-11-23 04:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:35.348195329 +0000 UTC m=+1111.541866698" watchObservedRunningTime="2025-11-23 04:13:35.371200857 +0000 UTC m=+1111.564872216" Nov 23 04:13:35 crc kubenswrapper[4751]: I1123 04:13:35.406871 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" podStartSLOduration=3.4068477870000002 podStartE2EDuration="3.406847787s" podCreationTimestamp="2025-11-23 04:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:35.370103618 +0000 UTC m=+1111.563774977" watchObservedRunningTime="2025-11-23 04:13:35.406847787 +0000 UTC m=+1111.600519146" Nov 23 04:13:36 crc kubenswrapper[4751]: I1123 04:13:36.492312 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:36 crc kubenswrapper[4751]: I1123 04:13:36.500486 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.114773 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.115317 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.371265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04b4514d-f91c-4650-b1ef-9536c263a2f9","Type":"ContainerStarted","Data":"945e0a27c0d48a1613f6162a2d6368223bd930fa7e8345518653c1486ff8104b"} Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.372872 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ab61fde-edb6-4a1d-8a8f-597bc03064f2","Type":"ContainerStarted","Data":"a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d"} Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.372975 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7ab61fde-edb6-4a1d-8a8f-597bc03064f2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d" gracePeriod=30 Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.376971 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53f8ac0f-8962-4005-adfd-98544abc7889","Type":"ContainerStarted","Data":"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18"} Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.390787 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerStarted","Data":"c1289cdccf1f784e239fab18d3f63225cdb8168dc74e57a342a58076eeda284f"} Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.391693 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.395687 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.75540452 podStartE2EDuration="6.395673062s" podCreationTimestamp="2025-11-23 04:13:32 +0000 UTC" firstStartedPulling="2025-11-23 04:13:33.187411559 +0000 UTC m=+1109.381082908" lastFinishedPulling="2025-11-23 04:13:37.827680091 +0000 UTC m=+1114.021351450" observedRunningTime="2025-11-23 04:13:38.394396508 +0000 UTC m=+1114.588067867" watchObservedRunningTime="2025-11-23 04:13:38.395673062 +0000 UTC m=+1114.589344421" Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.402540 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc","Type":"ContainerStarted","Data":"1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531"} Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.432659 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.602886871 podStartE2EDuration="7.432637077s" podCreationTimestamp="2025-11-23 
04:13:31 +0000 UTC" firstStartedPulling="2025-11-23 04:13:32.0557357 +0000 UTC m=+1108.249407059" lastFinishedPulling="2025-11-23 04:13:37.885485906 +0000 UTC m=+1114.079157265" observedRunningTime="2025-11-23 04:13:38.413621346 +0000 UTC m=+1114.607292705" watchObservedRunningTime="2025-11-23 04:13:38.432637077 +0000 UTC m=+1114.626308436" Nov 23 04:13:38 crc kubenswrapper[4751]: I1123 04:13:38.456031 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.828842298 podStartE2EDuration="6.455995884s" podCreationTimestamp="2025-11-23 04:13:32 +0000 UTC" firstStartedPulling="2025-11-23 04:13:33.251797048 +0000 UTC m=+1109.445468407" lastFinishedPulling="2025-11-23 04:13:37.878950634 +0000 UTC m=+1114.072621993" observedRunningTime="2025-11-23 04:13:38.444044219 +0000 UTC m=+1114.637715578" watchObservedRunningTime="2025-11-23 04:13:38.455995884 +0000 UTC m=+1114.649667243" Nov 23 04:13:39 crc kubenswrapper[4751]: I1123 04:13:39.414999 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53f8ac0f-8962-4005-adfd-98544abc7889","Type":"ContainerStarted","Data":"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2"} Nov 23 04:13:39 crc kubenswrapper[4751]: I1123 04:13:39.415159 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" containerName="nova-metadata-metadata" containerID="cri-o://19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2" gracePeriod=30 Nov 23 04:13:39 crc kubenswrapper[4751]: I1123 04:13:39.415147 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" containerName="nova-metadata-log" containerID="cri-o://cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18" gracePeriod=30 Nov 23 04:13:39 crc kubenswrapper[4751]: I1123 04:13:39.420702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04b4514d-f91c-4650-b1ef-9536c263a2f9","Type":"ContainerStarted","Data":"beabbfa5d2b0a36fa7240feee04ffffebe8e760d96bc28ebb6b0a894364e2ef2"} Nov 23 04:13:39 crc kubenswrapper[4751]: I1123 04:13:39.455501 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.86290972 podStartE2EDuration="7.455480973s" podCreationTimestamp="2025-11-23 04:13:32 +0000 UTC" firstStartedPulling="2025-11-23 04:13:33.286151065 +0000 UTC m=+1109.479822424" lastFinishedPulling="2025-11-23 04:13:37.878722318 +0000 UTC m=+1114.072393677" observedRunningTime="2025-11-23 04:13:39.447461271 +0000 UTC m=+1115.641132640" watchObservedRunningTime="2025-11-23 04:13:39.455480973 +0000 UTC m=+1115.649152342" Nov 23 04:13:39 crc kubenswrapper[4751]: I1123 04:13:39.489988 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.713124056 podStartE2EDuration="7.489965183s" podCreationTimestamp="2025-11-23 04:13:32 +0000 UTC" firstStartedPulling="2025-11-23 04:13:33.110740195 +0000 UTC m=+1109.304411554" lastFinishedPulling="2025-11-23 04:13:37.887581322 +0000 UTC m=+1114.081252681" observedRunningTime="2025-11-23 04:13:39.47392347 +0000 UTC m=+1115.667594849" watchObservedRunningTime="2025-11-23 04:13:39.489965183 +0000 UTC m=+1115.683636552" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.061233 4751 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.199696 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phk9x\" (UniqueName: \"kubernetes.io/projected/53f8ac0f-8962-4005-adfd-98544abc7889-kube-api-access-phk9x\") pod \"53f8ac0f-8962-4005-adfd-98544abc7889\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.199801 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-combined-ca-bundle\") pod \"53f8ac0f-8962-4005-adfd-98544abc7889\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.199833 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f8ac0f-8962-4005-adfd-98544abc7889-logs\") pod \"53f8ac0f-8962-4005-adfd-98544abc7889\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.199885 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-config-data\") pod \"53f8ac0f-8962-4005-adfd-98544abc7889\" (UID: \"53f8ac0f-8962-4005-adfd-98544abc7889\") " Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.200790 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f8ac0f-8962-4005-adfd-98544abc7889-logs" (OuterVolumeSpecName: "logs") pod "53f8ac0f-8962-4005-adfd-98544abc7889" (UID: "53f8ac0f-8962-4005-adfd-98544abc7889"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.213530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f8ac0f-8962-4005-adfd-98544abc7889-kube-api-access-phk9x" (OuterVolumeSpecName: "kube-api-access-phk9x") pod "53f8ac0f-8962-4005-adfd-98544abc7889" (UID: "53f8ac0f-8962-4005-adfd-98544abc7889"). InnerVolumeSpecName "kube-api-access-phk9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.235606 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53f8ac0f-8962-4005-adfd-98544abc7889" (UID: "53f8ac0f-8962-4005-adfd-98544abc7889"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.239444 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-config-data" (OuterVolumeSpecName: "config-data") pod "53f8ac0f-8962-4005-adfd-98544abc7889" (UID: "53f8ac0f-8962-4005-adfd-98544abc7889"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.301268 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phk9x\" (UniqueName: \"kubernetes.io/projected/53f8ac0f-8962-4005-adfd-98544abc7889-kube-api-access-phk9x\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.301309 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.301321 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53f8ac0f-8962-4005-adfd-98544abc7889-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.301329 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53f8ac0f-8962-4005-adfd-98544abc7889-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.431512 4751 generic.go:334] "Generic (PLEG): container finished" podID="53f8ac0f-8962-4005-adfd-98544abc7889" containerID="19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2" exitCode=0 Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.431553 4751 generic.go:334] "Generic (PLEG): container finished" podID="53f8ac0f-8962-4005-adfd-98544abc7889" containerID="cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18" exitCode=143 Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.431556 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53f8ac0f-8962-4005-adfd-98544abc7889","Type":"ContainerDied","Data":"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2"} Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.431610 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53f8ac0f-8962-4005-adfd-98544abc7889","Type":"ContainerDied","Data":"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18"} Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.431621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53f8ac0f-8962-4005-adfd-98544abc7889","Type":"ContainerDied","Data":"e8e3dfec01bd593399dce4c7f313328a1da61086d603457c89507b0d0d0fd6d4"} Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.431627 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.431638 4751 scope.go:117] "RemoveContainer" containerID="19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.476136 4751 scope.go:117] "RemoveContainer" containerID="cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.481560 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.504951 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.508188 4751 scope.go:117] "RemoveContainer" containerID="19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2" Nov 23 04:13:40 crc kubenswrapper[4751]: E1123 04:13:40.510688 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2\": container with ID starting with 19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2 not found: ID does not exist" containerID="19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.510731 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2"} err="failed to get container status \"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2\": rpc error: code = NotFound desc = could not find container \"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2\": container with ID starting with 19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2 not found: ID does not exist" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.510753 4751 scope.go:117] "RemoveContainer" containerID="cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18" Nov 23 04:13:40 crc kubenswrapper[4751]: E1123 04:13:40.514586 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18\": container with ID starting with cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18 not found: ID does not exist" containerID="cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.514617 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18"} err="failed to get container status \"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18\": rpc error: code = NotFound desc = could not find container \"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18\": container with ID starting with cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18 not found: ID does not exist" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.514633 4751 scope.go:117] "RemoveContainer" containerID="19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.515456 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:40 crc 
kubenswrapper[4751]: E1123 04:13:40.515852 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" containerName="nova-metadata-log" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.515868 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" containerName="nova-metadata-log" Nov 23 04:13:40 crc kubenswrapper[4751]: E1123 04:13:40.515886 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" containerName="nova-metadata-metadata" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.515892 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" containerName="nova-metadata-metadata" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.516075 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" containerName="nova-metadata-log" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.516106 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" containerName="nova-metadata-metadata" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.523167 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2"} err="failed to get container status \"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2\": rpc error: code = NotFound desc = could not find container \"19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2\": container with ID starting with 19262662ab3f59e322d958cbd0b7c6176cbba87d52c652f6916eda6f71a659b2 not found: ID does not exist" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.523209 4751 scope.go:117] "RemoveContainer" containerID="cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.524734 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18"} err="failed to get container status \"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18\": rpc error: code = NotFound desc = could not find container \"cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18\": container with ID starting with cb8d8a4408c9f48c78dfb5ef7a3be8d70fa1b6c96a315499918794a7bbbb3f18 not found: ID does not exist" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.541483 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.541605 4751 util.go:30] "No sandbox for pod can be found. 
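
The E-level ContainerStatus/DeleteContainer errors above are a benign race: the kubelet retries RemoveContainer for containers CRI-O has already pruned, gets NotFound, and moves on. The cpu_manager, state_mem, and memory_manager lines are the matching cleanup of per-container resource-manager state. A filter that separates this already-gone case from real deletion failures (a heuristic of mine, not anything the kubelet itself distinguishes):

```python
# Heuristic (mine): a "DeleteContainer returned error" entry carrying a
# NotFound status is the benign already-deleted race shown above; anything
# else would be a real deletion failure worth investigating.
def real_delete_failures(lines):
    return [
        line for line in lines
        if "DeleteContainer returned error" in line and "NotFound" not in line
    ]
```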
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.544752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.544987 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.659987 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f8ac0f-8962-4005-adfd-98544abc7889" path="/var/lib/kubelet/pods/53f8ac0f-8962-4005-adfd-98544abc7889/volumes" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.708091 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsj7s\" (UniqueName: \"kubernetes.io/projected/97ade683-d65a-459d-bbec-a8f4872426d7-kube-api-access-wsj7s\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.709138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.709254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.709327 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-config-data\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.709406 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97ade683-d65a-459d-bbec-a8f4872426d7-logs\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.813828 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.813907 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.813994 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-config-data\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.814064 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97ade683-d65a-459d-bbec-a8f4872426d7-logs\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.814145 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsj7s\" (UniqueName: \"kubernetes.io/projected/97ade683-d65a-459d-bbec-a8f4872426d7-kube-api-access-wsj7s\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.815208 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97ade683-d65a-459d-bbec-a8f4872426d7-logs\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.822141 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.832008 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-config-data\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.833158 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsj7s\" (UniqueName: \"kubernetes.io/projected/97ade683-d65a-459d-bbec-a8f4872426d7-kube-api-access-wsj7s\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.836112 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " pod="openstack/nova-metadata-0" Nov 23 04:13:40 crc kubenswrapper[4751]: I1123 04:13:40.862670 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:41 crc kubenswrapper[4751]: I1123 04:13:41.445337 4751 generic.go:334] "Generic (PLEG): container finished" podID="a210f151-b9cb-46c5-8493-0e6b9629b117" containerID="1cf5c8a9a3b6863f35db543172a8eb9110d1dc866fdb98158681e857f373a699" exitCode=0 Nov 23 04:13:41 crc kubenswrapper[4751]: I1123 04:13:41.445471 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2c7qw" event={"ID":"a210f151-b9cb-46c5-8493-0e6b9629b117","Type":"ContainerDied","Data":"1cf5c8a9a3b6863f35db543172a8eb9110d1dc866fdb98158681e857f373a699"} Nov 23 04:13:41 crc kubenswrapper[4751]: I1123 04:13:41.474279 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:41 crc kubenswrapper[4751]: W1123 04:13:41.492910 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97ade683_d65a_459d_bbec_a8f4872426d7.slice/crio-b5336ac657c44cb9e9e29beefabb90da9263b6904e7964ad88f1950688f6f868 WatchSource:0}: Error finding container b5336ac657c44cb9e9e29beefabb90da9263b6904e7964ad88f1950688f6f868: Status 404 returned error can't find the container with id b5336ac657c44cb9e9e29beefabb90da9263b6904e7964ad88f1950688f6f868 Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.467449 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97ade683-d65a-459d-bbec-a8f4872426d7","Type":"ContainerStarted","Data":"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77"} Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.467540 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97ade683-d65a-459d-bbec-a8f4872426d7","Type":"ContainerStarted","Data":"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421"} Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.467575 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97ade683-d65a-459d-bbec-a8f4872426d7","Type":"ContainerStarted","Data":"b5336ac657c44cb9e9e29beefabb90da9263b6904e7964ad88f1950688f6f868"} Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.489659 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.489635535 podStartE2EDuration="2.489635535s" podCreationTimestamp="2025-11-23 04:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:42.484233442 +0000 UTC m=+1118.677904821" watchObservedRunningTime="2025-11-23 04:13:42.489635535 +0000 UTC m=+1118.683306914" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.508933 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.508971 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.719655 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.719694 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.731732 4751 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.756694 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.780505 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.867312 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9scfw"] Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.867570 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" podUID="c0605451-7647-4783-8601-9662f5c14868" containerName="dnsmasq-dns" containerID="cri-o://38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce" gracePeriod=10 Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.893260 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.960673 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-scripts\") pod \"a210f151-b9cb-46c5-8493-0e6b9629b117\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.960715 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv4w8\" (UniqueName: \"kubernetes.io/projected/a210f151-b9cb-46c5-8493-0e6b9629b117-kube-api-access-jv4w8\") pod \"a210f151-b9cb-46c5-8493-0e6b9629b117\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.960843 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-combined-ca-bundle\") pod \"a210f151-b9cb-46c5-8493-0e6b9629b117\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.960885 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-config-data\") pod \"a210f151-b9cb-46c5-8493-0e6b9629b117\" (UID: \"a210f151-b9cb-46c5-8493-0e6b9629b117\") " Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.966126 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a210f151-b9cb-46c5-8493-0e6b9629b117-kube-api-access-jv4w8" (OuterVolumeSpecName: "kube-api-access-jv4w8") pod "a210f151-b9cb-46c5-8493-0e6b9629b117" (UID: "a210f151-b9cb-46c5-8493-0e6b9629b117"). InnerVolumeSpecName "kube-api-access-jv4w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:42 crc kubenswrapper[4751]: I1123 04:13:42.978814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-scripts" (OuterVolumeSpecName: "scripts") pod "a210f151-b9cb-46c5-8493-0e6b9629b117" (UID: "a210f151-b9cb-46c5-8493-0e6b9629b117"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.006964 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a210f151-b9cb-46c5-8493-0e6b9629b117" (UID: "a210f151-b9cb-46c5-8493-0e6b9629b117"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.023095 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-config-data" (OuterVolumeSpecName: "config-data") pod "a210f151-b9cb-46c5-8493-0e6b9629b117" (UID: "a210f151-b9cb-46c5-8493-0e6b9629b117"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.061986 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.062021 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv4w8\" (UniqueName: \"kubernetes.io/projected/a210f151-b9cb-46c5-8493-0e6b9629b117-kube-api-access-jv4w8\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.062034 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.062042 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210f151-b9cb-46c5-8493-0e6b9629b117-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.400488 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.470286 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-config\") pod \"c0605451-7647-4783-8601-9662f5c14868\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.470338 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6zxm\" (UniqueName: \"kubernetes.io/projected/c0605451-7647-4783-8601-9662f5c14868-kube-api-access-z6zxm\") pod \"c0605451-7647-4783-8601-9662f5c14868\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.470405 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-sb\") pod \"c0605451-7647-4783-8601-9662f5c14868\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.470544 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-svc\") pod \"c0605451-7647-4783-8601-9662f5c14868\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.470640 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-nb\") pod \"c0605451-7647-4783-8601-9662f5c14868\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.470673 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-swift-storage-0\") pod \"c0605451-7647-4783-8601-9662f5c14868\" (UID: \"c0605451-7647-4783-8601-9662f5c14868\") " Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.476299 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0605451-7647-4783-8601-9662f5c14868-kube-api-access-z6zxm" (OuterVolumeSpecName: "kube-api-access-z6zxm") pod "c0605451-7647-4783-8601-9662f5c14868" (UID: "c0605451-7647-4783-8601-9662f5c14868"). InnerVolumeSpecName "kube-api-access-z6zxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.482684 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2c7qw" event={"ID":"a210f151-b9cb-46c5-8493-0e6b9629b117","Type":"ContainerDied","Data":"2369b4ec4a1e21ebc37d2e849364eaae43074d2058139aa350cef5ceeb6c4721"} Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.482734 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2369b4ec4a1e21ebc37d2e849364eaae43074d2058139aa350cef5ceeb6c4721" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.482817 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2c7qw" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.499730 4751 generic.go:334] "Generic (PLEG): container finished" podID="c0605451-7647-4783-8601-9662f5c14868" containerID="38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce" exitCode=0 Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.499852 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" event={"ID":"c0605451-7647-4783-8601-9662f5c14868","Type":"ContainerDied","Data":"38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce"} Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.499901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" event={"ID":"c0605451-7647-4783-8601-9662f5c14868","Type":"ContainerDied","Data":"ef1c143beee664876b2ed288c5359053928a5f5e1776a01542403dfaec87dbdd"} Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.499927 4751 scope.go:117] "RemoveContainer" containerID="38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.500062 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9scfw" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.523827 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0605451-7647-4783-8601-9662f5c14868" (UID: "c0605451-7647-4783-8601-9662f5c14868"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.529359 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.543881 4751 scope.go:117] "RemoveContainer" containerID="a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.548429 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-config" (OuterVolumeSpecName: "config") pod "c0605451-7647-4783-8601-9662f5c14868" (UID: "c0605451-7647-4783-8601-9662f5c14868"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.555499 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0605451-7647-4783-8601-9662f5c14868" (UID: "c0605451-7647-4783-8601-9662f5c14868"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.571521 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0605451-7647-4783-8601-9662f5c14868" (UID: "c0605451-7647-4783-8601-9662f5c14868"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.573591 4751 scope.go:117] "RemoveContainer" containerID="38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.574155 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.574179 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6zxm\" (UniqueName: \"kubernetes.io/projected/c0605451-7647-4783-8601-9662f5c14868-kube-api-access-z6zxm\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.574189 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.574197 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.574205 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: E1123 04:13:43.575209 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce\": container with ID starting with 38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce not found: ID does not exist" containerID="38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.575234 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce"} err="failed to get container status \"38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce\": rpc error: code = NotFound desc = could not find container \"38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce\": container with ID starting with 38bb2aca055af748296fa7e3bdb985036b51e65e9ee617946f67fd741958d3ce not found: ID does not exist" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.575252 4751 scope.go:117] "RemoveContainer" containerID="a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54" Nov 23 04:13:43 crc kubenswrapper[4751]: E1123 04:13:43.575626 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54\": container with ID starting with a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54 not found: ID does not exist" containerID="a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.575666 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54"} err="failed to get container status 
\"a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54\": rpc error: code = NotFound desc = could not find container \"a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54\": container with ID starting with a52f69bd43630d2e83bc86960f4bcef9b134e2339ed84cbf181faccd2a42df54 not found: ID does not exist" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.589641 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0605451-7647-4783-8601-9662f5c14868" (UID: "c0605451-7647-4783-8601-9662f5c14868"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.591640 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.591640 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.676013 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0605451-7647-4783-8601-9662f5c14868-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.743712 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.743920 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-log" containerID="cri-o://945e0a27c0d48a1613f6162a2d6368223bd930fa7e8345518653c1486ff8104b" gracePeriod=30 Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.744026 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-api" containerID="cri-o://beabbfa5d2b0a36fa7240feee04ffffebe8e760d96bc28ebb6b0a894364e2ef2" gracePeriod=30 Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.768888 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.829805 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9scfw"] Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.838449 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9scfw"] Nov 23 04:13:43 crc kubenswrapper[4751]: I1123 04:13:43.949909 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:44 crc kubenswrapper[4751]: I1123 04:13:44.510241 4751 generic.go:334] "Generic (PLEG): container finished" podID="f577aed3-fab8-4a7e-9beb-4cb6d472530b" containerID="b732c3b3b170f5158fc963a3d9b0a0fcbe44c10c1d2d7a2497b4e3208d74b91f" exitCode=0 Nov 23 04:13:44 crc kubenswrapper[4751]: I1123 04:13:44.510583 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8gf78" event={"ID":"f577aed3-fab8-4a7e-9beb-4cb6d472530b","Type":"ContainerDied","Data":"b732c3b3b170f5158fc963a3d9b0a0fcbe44c10c1d2d7a2497b4e3208d74b91f"} Nov 23 04:13:44 crc kubenswrapper[4751]: I1123 04:13:44.513632 4751 generic.go:334] "Generic (PLEG): container finished" podID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerID="945e0a27c0d48a1613f6162a2d6368223bd930fa7e8345518653c1486ff8104b" exitCode=143 Nov 23 04:13:44 crc kubenswrapper[4751]: I1123 04:13:44.513665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04b4514d-f91c-4650-b1ef-9536c263a2f9","Type":"ContainerDied","Data":"945e0a27c0d48a1613f6162a2d6368223bd930fa7e8345518653c1486ff8104b"} Nov 23 04:13:44 crc kubenswrapper[4751]: I1123 04:13:44.514035 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" containerName="nova-metadata-log" containerID="cri-o://0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421" gracePeriod=30 Nov 23 04:13:44 crc kubenswrapper[4751]: I1123 04:13:44.514032 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" containerName="nova-metadata-metadata" containerID="cri-o://ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77" gracePeriod=30 Nov 23 04:13:44 crc kubenswrapper[4751]: I1123 04:13:44.655692 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0605451-7647-4783-8601-9662f5c14868" path="/var/lib/kubelet/pods/c0605451-7647-4783-8601-9662f5c14868/volumes" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.052666 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.202233 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-combined-ca-bundle\") pod \"97ade683-d65a-459d-bbec-a8f4872426d7\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.202354 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97ade683-d65a-459d-bbec-a8f4872426d7-logs\") pod \"97ade683-d65a-459d-bbec-a8f4872426d7\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.202390 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsj7s\" (UniqueName: \"kubernetes.io/projected/97ade683-d65a-459d-bbec-a8f4872426d7-kube-api-access-wsj7s\") pod \"97ade683-d65a-459d-bbec-a8f4872426d7\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.202461 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-config-data\") pod \"97ade683-d65a-459d-bbec-a8f4872426d7\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.202533 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-nova-metadata-tls-certs\") pod \"97ade683-d65a-459d-bbec-a8f4872426d7\" (UID: \"97ade683-d65a-459d-bbec-a8f4872426d7\") " Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.202683 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ade683-d65a-459d-bbec-a8f4872426d7-logs" (OuterVolumeSpecName: "logs") pod "97ade683-d65a-459d-bbec-a8f4872426d7" (UID: "97ade683-d65a-459d-bbec-a8f4872426d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.203524 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97ade683-d65a-459d-bbec-a8f4872426d7-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.215579 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ade683-d65a-459d-bbec-a8f4872426d7-kube-api-access-wsj7s" (OuterVolumeSpecName: "kube-api-access-wsj7s") pod "97ade683-d65a-459d-bbec-a8f4872426d7" (UID: "97ade683-d65a-459d-bbec-a8f4872426d7"). InnerVolumeSpecName "kube-api-access-wsj7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.244143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97ade683-d65a-459d-bbec-a8f4872426d7" (UID: "97ade683-d65a-459d-bbec-a8f4872426d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.274878 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-config-data" (OuterVolumeSpecName: "config-data") pod "97ade683-d65a-459d-bbec-a8f4872426d7" (UID: "97ade683-d65a-459d-bbec-a8f4872426d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.281454 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "97ade683-d65a-459d-bbec-a8f4872426d7" (UID: "97ade683-d65a-459d-bbec-a8f4872426d7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.304689 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsj7s\" (UniqueName: \"kubernetes.io/projected/97ade683-d65a-459d-bbec-a8f4872426d7-kube-api-access-wsj7s\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.304997 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.305008 4751 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.305017 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ade683-d65a-459d-bbec-a8f4872426d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.522540 4751 generic.go:334] "Generic (PLEG): container finished" podID="97ade683-d65a-459d-bbec-a8f4872426d7" containerID="ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77" exitCode=0 Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.522569 4751 generic.go:334] "Generic (PLEG): container finished" podID="97ade683-d65a-459d-bbec-a8f4872426d7" containerID="0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421" exitCode=143 Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.522601 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.522649 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97ade683-d65a-459d-bbec-a8f4872426d7","Type":"ContainerDied","Data":"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77"} Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.522698 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97ade683-d65a-459d-bbec-a8f4872426d7","Type":"ContainerDied","Data":"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421"} Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.522710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97ade683-d65a-459d-bbec-a8f4872426d7","Type":"ContainerDied","Data":"b5336ac657c44cb9e9e29beefabb90da9263b6904e7964ad88f1950688f6f868"} Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.522726 4751 scope.go:117] "RemoveContainer" containerID="ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.523017 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" containerName="nova-scheduler-scheduler" containerID="cri-o://1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531" gracePeriod=30 Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.550941 4751 scope.go:117] "RemoveContainer" containerID="0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.561540 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.572860 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.584184 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:45 crc kubenswrapper[4751]: E1123 04:13:45.584651 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" containerName="nova-metadata-log" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.584670 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" containerName="nova-metadata-log" Nov 23 04:13:45 crc kubenswrapper[4751]: E1123 04:13:45.584682 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0605451-7647-4783-8601-9662f5c14868" containerName="init" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.584693 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0605451-7647-4783-8601-9662f5c14868" containerName="init" Nov 23 04:13:45 crc kubenswrapper[4751]: E1123 04:13:45.584721 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210f151-b9cb-46c5-8493-0e6b9629b117" containerName="nova-manage" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.584729 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210f151-b9cb-46c5-8493-0e6b9629b117" containerName="nova-manage" Nov 23 04:13:45 crc kubenswrapper[4751]: E1123 04:13:45.584742 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0605451-7647-4783-8601-9662f5c14868" containerName="dnsmasq-dns" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.584748 
4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0605451-7647-4783-8601-9662f5c14868" containerName="dnsmasq-dns" Nov 23 04:13:45 crc kubenswrapper[4751]: E1123 04:13:45.584771 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" containerName="nova-metadata-metadata" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.584782 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" containerName="nova-metadata-metadata" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.584992 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0605451-7647-4783-8601-9662f5c14868" containerName="dnsmasq-dns" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.585023 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" containerName="nova-metadata-metadata" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.585039 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" containerName="nova-metadata-log" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.585051 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210f151-b9cb-46c5-8493-0e6b9629b117" containerName="nova-manage" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.586123 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.589203 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.589387 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.597493 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.607592 4751 scope.go:117] "RemoveContainer" containerID="ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77" Nov 23 04:13:45 crc kubenswrapper[4751]: E1123 04:13:45.608036 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77\": container with ID starting with ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77 not found: ID does not exist" containerID="ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.608077 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77"} err="failed to get container status \"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77\": rpc error: code = NotFound desc = could not find container \"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77\": container with ID starting with ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77 not found: ID does not exist" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.608104 4751 scope.go:117] "RemoveContainer" containerID="0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421" Nov 23 04:13:45 crc kubenswrapper[4751]: E1123 04:13:45.609570 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421\": container with ID starting with 0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421 not found: ID does not exist" containerID="0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.609621 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421"} err="failed to get container status \"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421\": rpc error: code = NotFound desc = could not find container \"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421\": container with ID starting with 0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421 not found: ID does not exist" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.609649 4751 scope.go:117] "RemoveContainer" containerID="ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.610473 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77"} err="failed to get container status \"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77\": rpc error: code = NotFound desc = could not find container \"ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77\": container with ID starting with ca067996dd5d5b1932e4c62bfffec6246e4fe64f0e881b15d7998a569423da77 not found: ID does not exist" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.610494 4751 scope.go:117] "RemoveContainer" containerID="0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.610689 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421"} err="failed to get container status \"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421\": rpc error: code = NotFound desc = could not find container \"0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421\": container with ID starting with 0832a1663f46d36f40a2109ff7786e1224ba9cf9646025229f39e1d016af3421 not found: ID does not exist" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.713158 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.713208 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-config-data\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.713562 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjpnz\" (UniqueName: \"kubernetes.io/projected/97400baf-e157-4da0-a69f-fe052eb9ec7c-kube-api-access-zjpnz\") pod 
\"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.713607 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97400baf-e157-4da0-a69f-fe052eb9ec7c-logs\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.713649 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.815103 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.815151 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-config-data\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.815212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjpnz\" (UniqueName: \"kubernetes.io/projected/97400baf-e157-4da0-a69f-fe052eb9ec7c-kube-api-access-zjpnz\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.815244 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97400baf-e157-4da0-a69f-fe052eb9ec7c-logs\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.815276 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.816016 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97400baf-e157-4da0-a69f-fe052eb9ec7c-logs\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.820155 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.820794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-config-data\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.822022 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.831203 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjpnz\" (UniqueName: \"kubernetes.io/projected/97400baf-e157-4da0-a69f-fe052eb9ec7c-kube-api-access-zjpnz\") pod \"nova-metadata-0\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.904019 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:13:45 crc kubenswrapper[4751]: I1123 04:13:45.906653 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.017796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-scripts\") pod \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.018230 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-config-data\") pod \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.018313 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89794\" (UniqueName: \"kubernetes.io/projected/f577aed3-fab8-4a7e-9beb-4cb6d472530b-kube-api-access-89794\") pod \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.018427 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-combined-ca-bundle\") pod \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\" (UID: \"f577aed3-fab8-4a7e-9beb-4cb6d472530b\") " Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.022313 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-scripts" (OuterVolumeSpecName: "scripts") pod "f577aed3-fab8-4a7e-9beb-4cb6d472530b" (UID: "f577aed3-fab8-4a7e-9beb-4cb6d472530b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.026957 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f577aed3-fab8-4a7e-9beb-4cb6d472530b-kube-api-access-89794" (OuterVolumeSpecName: "kube-api-access-89794") pod "f577aed3-fab8-4a7e-9beb-4cb6d472530b" (UID: "f577aed3-fab8-4a7e-9beb-4cb6d472530b"). InnerVolumeSpecName "kube-api-access-89794". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.050470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f577aed3-fab8-4a7e-9beb-4cb6d472530b" (UID: "f577aed3-fab8-4a7e-9beb-4cb6d472530b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.061000 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-config-data" (OuterVolumeSpecName: "config-data") pod "f577aed3-fab8-4a7e-9beb-4cb6d472530b" (UID: "f577aed3-fab8-4a7e-9beb-4cb6d472530b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.133555 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.133591 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.133602 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89794\" (UniqueName: \"kubernetes.io/projected/f577aed3-fab8-4a7e-9beb-4cb6d472530b-kube-api-access-89794\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.133616 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f577aed3-fab8-4a7e-9beb-4cb6d472530b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:46 crc kubenswrapper[4751]: W1123 04:13:46.343014 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97400baf_e157_4da0_a69f_fe052eb9ec7c.slice/crio-fe3ddaec94c3c857837a3580905e432155d3ac68639b5c5f8609cd74a8b35e2e WatchSource:0}: Error finding container fe3ddaec94c3c857837a3580905e432155d3ac68639b5c5f8609cd74a8b35e2e: Status 404 returned error can't find the container with id fe3ddaec94c3c857837a3580905e432155d3ac68639b5c5f8609cd74a8b35e2e Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.348981 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.543852 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97400baf-e157-4da0-a69f-fe052eb9ec7c","Type":"ContainerStarted","Data":"fe3ddaec94c3c857837a3580905e432155d3ac68639b5c5f8609cd74a8b35e2e"} Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.552460 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-8gf78" event={"ID":"f577aed3-fab8-4a7e-9beb-4cb6d472530b","Type":"ContainerDied","Data":"8e53e62c1978c234543bd13e0a28160b88eb4e333f99cd1cc38754885008ed3e"} Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.552530 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e53e62c1978c234543bd13e0a28160b88eb4e333f99cd1cc38754885008ed3e" Nov 23 04:13:46 crc 
kubenswrapper[4751]: I1123 04:13:46.552697 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-8gf78" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.711104 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ade683-d65a-459d-bbec-a8f4872426d7" path="/var/lib/kubelet/pods/97ade683-d65a-459d-bbec-a8f4872426d7/volumes" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.711955 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 04:13:46 crc kubenswrapper[4751]: E1123 04:13:46.712338 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f577aed3-fab8-4a7e-9beb-4cb6d472530b" containerName="nova-cell1-conductor-db-sync" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.712372 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f577aed3-fab8-4a7e-9beb-4cb6d472530b" containerName="nova-cell1-conductor-db-sync" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.712588 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f577aed3-fab8-4a7e-9beb-4cb6d472530b" containerName="nova-cell1-conductor-db-sync" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.713333 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.713663 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.716130 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.845634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c053622f-478e-4de7-9a6b-43c86c5ada7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.845704 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkht\" (UniqueName: \"kubernetes.io/projected/c053622f-478e-4de7-9a6b-43c86c5ada7b-kube-api-access-dnkht\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.845761 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c053622f-478e-4de7-9a6b-43c86c5ada7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.946760 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkht\" (UniqueName: \"kubernetes.io/projected/c053622f-478e-4de7-9a6b-43c86c5ada7b-kube-api-access-dnkht\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.947047 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c053622f-478e-4de7-9a6b-43c86c5ada7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.947233 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c053622f-478e-4de7-9a6b-43c86c5ada7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.950991 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c053622f-478e-4de7-9a6b-43c86c5ada7b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.951438 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c053622f-478e-4de7-9a6b-43c86c5ada7b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:46 crc kubenswrapper[4751]: I1123 04:13:46.967297 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkht\" (UniqueName: \"kubernetes.io/projected/c053622f-478e-4de7-9a6b-43c86c5ada7b-kube-api-access-dnkht\") pod \"nova-cell1-conductor-0\" (UID: \"c053622f-478e-4de7-9a6b-43c86c5ada7b\") " pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:47 crc kubenswrapper[4751]: I1123 04:13:47.041463 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:47 crc kubenswrapper[4751]: I1123 04:13:47.520792 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 04:13:47 crc kubenswrapper[4751]: I1123 04:13:47.565714 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97400baf-e157-4da0-a69f-fe052eb9ec7c","Type":"ContainerStarted","Data":"b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a"} Nov 23 04:13:47 crc kubenswrapper[4751]: I1123 04:13:47.565773 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97400baf-e157-4da0-a69f-fe052eb9ec7c","Type":"ContainerStarted","Data":"d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608"} Nov 23 04:13:47 crc kubenswrapper[4751]: I1123 04:13:47.568806 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c053622f-478e-4de7-9a6b-43c86c5ada7b","Type":"ContainerStarted","Data":"1ee7c2ef4687817a5549f9c3f0aa613a3c0dd06cdc9a9eca13038bcf663bd846"} Nov 23 04:13:47 crc kubenswrapper[4751]: I1123 04:13:47.588833 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.584431202 podStartE2EDuration="2.584431202s" podCreationTimestamp="2025-11-23 04:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:47.584064863 +0000 UTC m=+1123.777736242" watchObservedRunningTime="2025-11-23 04:13:47.584431202 +0000 UTC m=+1123.778102561" Nov 23 04:13:47 crc kubenswrapper[4751]: E1123 04:13:47.720826 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 04:13:47 crc kubenswrapper[4751]: E1123 04:13:47.723227 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 04:13:47 crc kubenswrapper[4751]: E1123 04:13:47.726286 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 04:13:47 crc kubenswrapper[4751]: E1123 04:13:47.726437 4751 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" containerName="nova-scheduler-scheduler" Nov 23 04:13:48 crc kubenswrapper[4751]: I1123 04:13:48.579241 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"c053622f-478e-4de7-9a6b-43c86c5ada7b","Type":"ContainerStarted","Data":"9df0a785f1fbc4d47793a79c6349d356de9e6c2aa310d28d5d84a05c8e12f9dd"} Nov 23 04:13:48 crc kubenswrapper[4751]: I1123 04:13:48.597779 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.597759478 podStartE2EDuration="2.597759478s" podCreationTimestamp="2025-11-23 04:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:48.593162326 +0000 UTC m=+1124.786833685" watchObservedRunningTime="2025-11-23 04:13:48.597759478 +0000 UTC m=+1124.791430837" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.592847 4751 generic.go:334] "Generic (PLEG): container finished" podID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerID="beabbfa5d2b0a36fa7240feee04ffffebe8e760d96bc28ebb6b0a894364e2ef2" exitCode=0 Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.592921 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04b4514d-f91c-4650-b1ef-9536c263a2f9","Type":"ContainerDied","Data":"beabbfa5d2b0a36fa7240feee04ffffebe8e760d96bc28ebb6b0a894364e2ef2"} Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.593202 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.689791 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.796756 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-combined-ca-bundle\") pod \"04b4514d-f91c-4650-b1ef-9536c263a2f9\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.796827 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b4514d-f91c-4650-b1ef-9536c263a2f9-logs\") pod \"04b4514d-f91c-4650-b1ef-9536c263a2f9\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.796998 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8jwn\" (UniqueName: \"kubernetes.io/projected/04b4514d-f91c-4650-b1ef-9536c263a2f9-kube-api-access-c8jwn\") pod \"04b4514d-f91c-4650-b1ef-9536c263a2f9\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.797077 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-config-data\") pod \"04b4514d-f91c-4650-b1ef-9536c263a2f9\" (UID: \"04b4514d-f91c-4650-b1ef-9536c263a2f9\") " Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.797201 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b4514d-f91c-4650-b1ef-9536c263a2f9-logs" (OuterVolumeSpecName: "logs") pod "04b4514d-f91c-4650-b1ef-9536c263a2f9" (UID: "04b4514d-f91c-4650-b1ef-9536c263a2f9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.797631 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b4514d-f91c-4650-b1ef-9536c263a2f9-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.802129 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b4514d-f91c-4650-b1ef-9536c263a2f9-kube-api-access-c8jwn" (OuterVolumeSpecName: "kube-api-access-c8jwn") pod "04b4514d-f91c-4650-b1ef-9536c263a2f9" (UID: "04b4514d-f91c-4650-b1ef-9536c263a2f9"). InnerVolumeSpecName "kube-api-access-c8jwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.823902 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-config-data" (OuterVolumeSpecName: "config-data") pod "04b4514d-f91c-4650-b1ef-9536c263a2f9" (UID: "04b4514d-f91c-4650-b1ef-9536c263a2f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.836752 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04b4514d-f91c-4650-b1ef-9536c263a2f9" (UID: "04b4514d-f91c-4650-b1ef-9536c263a2f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.900814 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8jwn\" (UniqueName: \"kubernetes.io/projected/04b4514d-f91c-4650-b1ef-9536c263a2f9-kube-api-access-c8jwn\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.900854 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:49 crc kubenswrapper[4751]: I1123 04:13:49.900866 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b4514d-f91c-4650-b1ef-9536c263a2f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.349151 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.513902 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-combined-ca-bundle\") pod \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.514029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-config-data\") pod \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.514192 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgww4\" (UniqueName: \"kubernetes.io/projected/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-kube-api-access-kgww4\") pod \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\" (UID: \"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc\") " Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.519575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-kube-api-access-kgww4" (OuterVolumeSpecName: "kube-api-access-kgww4") pod "1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" (UID: "1e1f5fca-a967-4b29-b4f2-6033bde6f7bc"). InnerVolumeSpecName "kube-api-access-kgww4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.545076 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-config-data" (OuterVolumeSpecName: "config-data") pod "1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" (UID: "1e1f5fca-a967-4b29-b4f2-6033bde6f7bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.555667 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" (UID: "1e1f5fca-a967-4b29-b4f2-6033bde6f7bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.604328 4751 generic.go:334] "Generic (PLEG): container finished" podID="1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" containerID="1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531" exitCode=0 Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.604404 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.605704 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc","Type":"ContainerDied","Data":"1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531"} Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.605967 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1e1f5fca-a967-4b29-b4f2-6033bde6f7bc","Type":"ContainerDied","Data":"4e66e1dfbe1f2a2a6199dc9a41880a097fd5a4f083a383bc7f3081abd1f49072"} Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.606009 4751 scope.go:117] "RemoveContainer" containerID="1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.607048 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04b4514d-f91c-4650-b1ef-9536c263a2f9","Type":"ContainerDied","Data":"0b34df10e8bf7ec5a3741268e0067fd2d247efefe1cca4d9e6cc6c92dfe8f1d8"} Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.607286 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.615935 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgww4\" (UniqueName: \"kubernetes.io/projected/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-kube-api-access-kgww4\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.615977 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.615990 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.643505 4751 scope.go:117] "RemoveContainer" containerID="1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531" Nov 23 04:13:50 crc kubenswrapper[4751]: E1123 04:13:50.645008 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531\": container with ID starting with 1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531 not found: ID does not exist" containerID="1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.645047 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531"} err="failed to get container status \"1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531\": rpc error: code = NotFound desc = could not find container \"1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531\": container with ID starting with 1975f0826c6dc9a1a02a65b4fdfb70ca24e7bfef2ef59457323258a5411b0531 not found: ID does not exist" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.645066 4751 scope.go:117] "RemoveContainer" containerID="beabbfa5d2b0a36fa7240feee04ffffebe8e760d96bc28ebb6b0a894364e2ef2" Nov 23 04:13:50 crc 
kubenswrapper[4751]: I1123 04:13:50.679693 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.689587 4751 scope.go:117] "RemoveContainer" containerID="945e0a27c0d48a1613f6162a2d6368223bd930fa7e8345518653c1486ff8104b" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.704002 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.719053 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.729759 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:50 crc kubenswrapper[4751]: E1123 04:13:50.730266 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-api" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.730291 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-api" Nov 23 04:13:50 crc kubenswrapper[4751]: E1123 04:13:50.730315 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-log" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.730324 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-log" Nov 23 04:13:50 crc kubenswrapper[4751]: E1123 04:13:50.730357 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" containerName="nova-scheduler-scheduler" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.730366 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" containerName="nova-scheduler-scheduler" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.730584 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-api" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.730605 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" containerName="nova-scheduler-scheduler" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.730624 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" containerName="nova-api-log" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.731269 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.736544 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.742226 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.757477 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.760391 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.762188 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.767929 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.768188 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.818967 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-config-data\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.819168 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhfn\" (UniqueName: \"kubernetes.io/projected/9fec9886-9b0e-4458-a67f-afbe529fa2c3-kube-api-access-grhfn\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.819601 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.904822 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.904908 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.921492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8246e263-9fb1-4b28-aff3-0f02e364748d-logs\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.921542 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-config-data\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.921594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhfn\" (UniqueName: \"kubernetes.io/projected/9fec9886-9b0e-4458-a67f-afbe529fa2c3-kube-api-access-grhfn\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.921684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.921699 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.921746 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-config-data\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.921771 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs9dk\" (UniqueName: \"kubernetes.io/projected/8246e263-9fb1-4b28-aff3-0f02e364748d-kube-api-access-xs9dk\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.938795 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-config-data\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.939732 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:50 crc kubenswrapper[4751]: I1123 04:13:50.949756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhfn\" (UniqueName: \"kubernetes.io/projected/9fec9886-9b0e-4458-a67f-afbe529fa2c3-kube-api-access-grhfn\") pod \"nova-scheduler-0\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " pod="openstack/nova-scheduler-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.023871 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs9dk\" (UniqueName: \"kubernetes.io/projected/8246e263-9fb1-4b28-aff3-0f02e364748d-kube-api-access-xs9dk\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.023916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8246e263-9fb1-4b28-aff3-0f02e364748d-logs\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.023955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-config-data\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.024114 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.024848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/8246e263-9fb1-4b28-aff3-0f02e364748d-logs\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.030193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.030732 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-config-data\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.042813 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs9dk\" (UniqueName: \"kubernetes.io/projected/8246e263-9fb1-4b28-aff3-0f02e364748d-kube-api-access-xs9dk\") pod \"nova-api-0\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.051021 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.078564 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.529920 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:13:51 crc kubenswrapper[4751]: W1123 04:13:51.539314 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fec9886_9b0e_4458_a67f_afbe529fa2c3.slice/crio-f8ae5e8e6f653da1866049d257cf6f8a564484075c1c9116734664e4770bb134 WatchSource:0}: Error finding container f8ae5e8e6f653da1866049d257cf6f8a564484075c1c9116734664e4770bb134: Status 404 returned error can't find the container with id f8ae5e8e6f653da1866049d257cf6f8a564484075c1c9116734664e4770bb134 Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.617219 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fec9886-9b0e-4458-a67f-afbe529fa2c3","Type":"ContainerStarted","Data":"f8ae5e8e6f653da1866049d257cf6f8a564484075c1c9116734664e4770bb134"} Nov 23 04:13:51 crc kubenswrapper[4751]: W1123 04:13:51.645565 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8246e263_9fb1_4b28_aff3_0f02e364748d.slice/crio-82606bece04a5309821fdedcf20ac1223d87af4dcda18885f0b080e1fd5717d8 WatchSource:0}: Error finding container 82606bece04a5309821fdedcf20ac1223d87af4dcda18885f0b080e1fd5717d8: Status 404 returned error can't find the container with id 82606bece04a5309821fdedcf20ac1223d87af4dcda18885f0b080e1fd5717d8 Nov 23 04:13:51 crc kubenswrapper[4751]: I1123 04:13:51.647115 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.067991 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.640753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"9fec9886-9b0e-4458-a67f-afbe529fa2c3","Type":"ContainerStarted","Data":"4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563"} Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.670937 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b4514d-f91c-4650-b1ef-9536c263a2f9" path="/var/lib/kubelet/pods/04b4514d-f91c-4650-b1ef-9536c263a2f9/volumes" Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.671047 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.671020843 podStartE2EDuration="2.671020843s" podCreationTimestamp="2025-11-23 04:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:52.660520946 +0000 UTC m=+1128.854192315" watchObservedRunningTime="2025-11-23 04:13:52.671020843 +0000 UTC m=+1128.864692212" Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.671734 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e1f5fca-a967-4b29-b4f2-6033bde6f7bc" path="/var/lib/kubelet/pods/1e1f5fca-a967-4b29-b4f2-6033bde6f7bc/volumes" Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.672317 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8246e263-9fb1-4b28-aff3-0f02e364748d","Type":"ContainerStarted","Data":"ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d"} Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.672378 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8246e263-9fb1-4b28-aff3-0f02e364748d","Type":"ContainerStarted","Data":"e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08"} Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.672393 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8246e263-9fb1-4b28-aff3-0f02e364748d","Type":"ContainerStarted","Data":"82606bece04a5309821fdedcf20ac1223d87af4dcda18885f0b080e1fd5717d8"} Nov 23 04:13:52 crc kubenswrapper[4751]: I1123 04:13:52.688081 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.688050703 podStartE2EDuration="2.688050703s" podCreationTimestamp="2025-11-23 04:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:13:52.68453501 +0000 UTC m=+1128.878206379" watchObservedRunningTime="2025-11-23 04:13:52.688050703 +0000 UTC m=+1128.881722092" Nov 23 04:13:55 crc kubenswrapper[4751]: I1123 04:13:55.904620 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 04:13:55 crc kubenswrapper[4751]: I1123 04:13:55.904927 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 04:13:56 crc kubenswrapper[4751]: I1123 04:13:56.051726 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 04:13:56 crc kubenswrapper[4751]: I1123 04:13:56.925609 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Nov 23 04:13:56 crc kubenswrapper[4751]: I1123 04:13:56.925612 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 04:14:01 crc kubenswrapper[4751]: I1123 04:14:01.052034 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 04:14:01 crc kubenswrapper[4751]: I1123 04:14:01.079245 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 04:14:01 crc kubenswrapper[4751]: I1123 04:14:01.079589 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 04:14:01 crc kubenswrapper[4751]: I1123 04:14:01.102692 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 04:14:01 crc kubenswrapper[4751]: I1123 04:14:01.531235 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 04:14:01 crc kubenswrapper[4751]: I1123 04:14:01.813473 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 04:14:02 crc kubenswrapper[4751]: I1123 04:14:02.162548 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 04:14:02 crc kubenswrapper[4751]: I1123 04:14:02.162608 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.275802 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.276422 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="82dca28c-075e-461a-9404-8298cce5588d" containerName="kube-state-metrics" containerID="cri-o://707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8" gracePeriod=30 Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.486704 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="82dca28c-075e-461a-9404-8298cce5588d" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": dial tcp 10.217.0.105:8081: connect: connection refused" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.797089 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.810873 4751 generic.go:334] "Generic (PLEG): container finished" podID="82dca28c-075e-461a-9404-8298cce5588d" containerID="707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8" exitCode=2 Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.810907 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.810925 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"82dca28c-075e-461a-9404-8298cce5588d","Type":"ContainerDied","Data":"707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8"} Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.811392 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"82dca28c-075e-461a-9404-8298cce5588d","Type":"ContainerDied","Data":"43f4ddf0b3f817208f4b3bb31bb765967eaf40f62fe36151e5c54f48548b0af3"} Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.811419 4751 scope.go:117] "RemoveContainer" containerID="707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.841931 4751 scope.go:117] "RemoveContainer" containerID="707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8" Nov 23 04:14:05 crc kubenswrapper[4751]: E1123 04:14:05.842313 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8\": container with ID starting with 707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8 not found: ID does not exist" containerID="707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.842359 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8"} err="failed to get container status \"707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8\": rpc error: code = NotFound desc = could not find container \"707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8\": container with ID starting with 707d772a50dc971011661836992b329ee0887caab74e48401f3d79dfd22e9dc8 not found: ID does not exist" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.909211 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.913210 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.913470 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.924108 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrxsb\" (UniqueName: \"kubernetes.io/projected/82dca28c-075e-461a-9404-8298cce5588d-kube-api-access-nrxsb\") pod \"82dca28c-075e-461a-9404-8298cce5588d\" (UID: \"82dca28c-075e-461a-9404-8298cce5588d\") " Nov 23 04:14:05 crc kubenswrapper[4751]: I1123 04:14:05.929095 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/82dca28c-075e-461a-9404-8298cce5588d-kube-api-access-nrxsb" (OuterVolumeSpecName: "kube-api-access-nrxsb") pod "82dca28c-075e-461a-9404-8298cce5588d" (UID: "82dca28c-075e-461a-9404-8298cce5588d"). InnerVolumeSpecName "kube-api-access-nrxsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.026811 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrxsb\" (UniqueName: \"kubernetes.io/projected/82dca28c-075e-461a-9404-8298cce5588d-kube-api-access-nrxsb\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.171628 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.186828 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.199803 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:14:06 crc kubenswrapper[4751]: E1123 04:14:06.200490 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82dca28c-075e-461a-9404-8298cce5588d" containerName="kube-state-metrics" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.200610 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="82dca28c-075e-461a-9404-8298cce5588d" containerName="kube-state-metrics" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.200886 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="82dca28c-075e-461a-9404-8298cce5588d" containerName="kube-state-metrics" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.201617 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.203626 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.203739 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.209530 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.333773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.333865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.333903 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b9rr\" (UniqueName: \"kubernetes.io/projected/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-api-access-7b9rr\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.334027 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.435685 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.435989 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.436069 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b9rr\" (UniqueName: \"kubernetes.io/projected/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-api-access-7b9rr\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.436182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.440393 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.440910 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.442310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edbd1755-69d6-4ae1-809a-a64203c0c090-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.454780 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b9rr\" (UniqueName: \"kubernetes.io/projected/edbd1755-69d6-4ae1-809a-a64203c0c090-kube-api-access-7b9rr\") pod \"kube-state-metrics-0\" (UID: \"edbd1755-69d6-4ae1-809a-a64203c0c090\") " pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.521223 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.656888 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82dca28c-075e-461a-9404-8298cce5588d" path="/var/lib/kubelet/pods/82dca28c-075e-461a-9404-8298cce5588d/volumes" Nov 23 04:14:06 crc kubenswrapper[4751]: I1123 04:14:06.829816 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.024718 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.110359 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.110890 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="ceilometer-central-agent" containerID="cri-o://57a058947e596dffbfa59aa761c4ccba6320eb93497f2ec7e146bd68f9fb9188" gracePeriod=30 Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.111065 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="proxy-httpd" containerID="cri-o://c1289cdccf1f784e239fab18d3f63225cdb8168dc74e57a342a58076eeda284f" gracePeriod=30 Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.111246 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="ceilometer-notification-agent" containerID="cri-o://225ddfaa6053f9b9c41c0be81e9d9497177c419bb2d8d4b152150a8d52d19111" gracePeriod=30 Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.111297 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="sg-core" containerID="cri-o://1373e3b47122d74d50d20c7e031e1c4731a7c5e6437cdb6f5505e20f30e89f14" gracePeriod=30 Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.837324 4751 generic.go:334] "Generic (PLEG): container finished" podID="968acafd-d3ef-457c-853a-4dec707be397" containerID="c1289cdccf1f784e239fab18d3f63225cdb8168dc74e57a342a58076eeda284f" exitCode=0 Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.837620 4751 generic.go:334] "Generic (PLEG): container finished" podID="968acafd-d3ef-457c-853a-4dec707be397" containerID="1373e3b47122d74d50d20c7e031e1c4731a7c5e6437cdb6f5505e20f30e89f14" exitCode=2 Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.837633 4751 generic.go:334] "Generic (PLEG): container finished" podID="968acafd-d3ef-457c-853a-4dec707be397" containerID="57a058947e596dffbfa59aa761c4ccba6320eb93497f2ec7e146bd68f9fb9188" exitCode=0 Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.837414 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerDied","Data":"c1289cdccf1f784e239fab18d3f63225cdb8168dc74e57a342a58076eeda284f"} Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.837716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerDied","Data":"1373e3b47122d74d50d20c7e031e1c4731a7c5e6437cdb6f5505e20f30e89f14"} Nov 23 
04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.837733 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerDied","Data":"57a058947e596dffbfa59aa761c4ccba6320eb93497f2ec7e146bd68f9fb9188"} Nov 23 04:14:07 crc kubenswrapper[4751]: I1123 04:14:07.839521 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbd1755-69d6-4ae1-809a-a64203c0c090","Type":"ContainerStarted","Data":"83111328c8fb1a130f5b37a8b7b426d993765f62b6f4b585333489da95e9a228"} Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.115775 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.115830 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.115875 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.116590 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3c3326b403c9822bff7bedc4dca6772d05fadb01a9321aa74a5bdfd193f9c8e"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.116650 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://c3c3326b403c9822bff7bedc4dca6772d05fadb01a9321aa74a5bdfd193f9c8e" gracePeriod=600 Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.841100 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.848968 4751 generic.go:334] "Generic (PLEG): container finished" podID="7ab61fde-edb6-4a1d-8a8f-597bc03064f2" containerID="a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d" exitCode=137 Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.849037 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.849045 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ab61fde-edb6-4a1d-8a8f-597bc03064f2","Type":"ContainerDied","Data":"a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d"} Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.849231 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ab61fde-edb6-4a1d-8a8f-597bc03064f2","Type":"ContainerDied","Data":"05b24ad5f981f864c7708a9a953b620757e4741440f68646299f7e6d8ec46ac9"} Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.849279 4751 scope.go:117] "RemoveContainer" containerID="a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.855640 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="c3c3326b403c9822bff7bedc4dca6772d05fadb01a9321aa74a5bdfd193f9c8e" exitCode=0 Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.855703 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"c3c3326b403c9822bff7bedc4dca6772d05fadb01a9321aa74a5bdfd193f9c8e"} Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.855764 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"f9afc4613e34fddd72c47e695fb68189a79f42c9d6a829aab1415dc9499eb82f"} Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.858136 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"edbd1755-69d6-4ae1-809a-a64203c0c090","Type":"ContainerStarted","Data":"5d287ae8c3faf41f99675eeabcf7e34b8b96e6f809d97731c77a68287eebb717"} Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.858224 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.877583 4751 scope.go:117] "RemoveContainer" containerID="a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.886237 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.28248467 podStartE2EDuration="2.886217165s" podCreationTimestamp="2025-11-23 04:14:06 +0000 UTC" firstStartedPulling="2025-11-23 04:14:07.028750701 +0000 UTC m=+1143.222422060" lastFinishedPulling="2025-11-23 04:14:07.632483166 +0000 UTC m=+1143.826154555" observedRunningTime="2025-11-23 04:14:08.879488567 +0000 UTC m=+1145.073159916" watchObservedRunningTime="2025-11-23 04:14:08.886217165 +0000 UTC m=+1145.079888524" Nov 23 04:14:08 crc kubenswrapper[4751]: E1123 04:14:08.887204 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d\": container with ID starting with a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d not found: ID does not exist" containerID="a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d" Nov 23 04:14:08 crc 
kubenswrapper[4751]: I1123 04:14:08.887279 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d"} err="failed to get container status \"a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d\": rpc error: code = NotFound desc = could not find container \"a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d\": container with ID starting with a935cb721b15b51bf31a45e9930c9016cd25acabde017aad99fc2e54c323889d not found: ID does not exist" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.887330 4751 scope.go:117] "RemoveContainer" containerID="92b37deee194d835919b13e465ed8d01f88734ed8d61a3352e53915d15308b01" Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.982016 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-config-data\") pod \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.982169 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-combined-ca-bundle\") pod \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.982371 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnhnb\" (UniqueName: \"kubernetes.io/projected/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-kube-api-access-mnhnb\") pod \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\" (UID: \"7ab61fde-edb6-4a1d-8a8f-597bc03064f2\") " Nov 23 04:14:08 crc kubenswrapper[4751]: I1123 04:14:08.997731 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-kube-api-access-mnhnb" (OuterVolumeSpecName: "kube-api-access-mnhnb") pod "7ab61fde-edb6-4a1d-8a8f-597bc03064f2" (UID: "7ab61fde-edb6-4a1d-8a8f-597bc03064f2"). InnerVolumeSpecName "kube-api-access-mnhnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.017333 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ab61fde-edb6-4a1d-8a8f-597bc03064f2" (UID: "7ab61fde-edb6-4a1d-8a8f-597bc03064f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.027053 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-config-data" (OuterVolumeSpecName: "config-data") pod "7ab61fde-edb6-4a1d-8a8f-597bc03064f2" (UID: "7ab61fde-edb6-4a1d-8a8f-597bc03064f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.084403 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnhnb\" (UniqueName: \"kubernetes.io/projected/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-kube-api-access-mnhnb\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.084695 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.084709 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab61fde-edb6-4a1d-8a8f-597bc03064f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.186979 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.194856 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.205881 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:14:09 crc kubenswrapper[4751]: E1123 04:14:09.206293 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab61fde-edb6-4a1d-8a8f-597bc03064f2" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.206318 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab61fde-edb6-4a1d-8a8f-597bc03064f2" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.207423 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab61fde-edb6-4a1d-8a8f-597bc03064f2" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.208021 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.210491 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.210563 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.210731 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.217086 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.287295 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.287378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s66l\" (UniqueName: \"kubernetes.io/projected/c7b90669-75e1-4f89-860f-7dfa61d6fa48-kube-api-access-2s66l\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.287431 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.287527 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.287556 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.389548 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.389812 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.389959 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.390048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s66l\" (UniqueName: \"kubernetes.io/projected/c7b90669-75e1-4f89-860f-7dfa61d6fa48-kube-api-access-2s66l\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.390172 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.395074 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.396175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.396420 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.396865 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b90669-75e1-4f89-860f-7dfa61d6fa48-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.408212 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s66l\" (UniqueName: \"kubernetes.io/projected/c7b90669-75e1-4f89-860f-7dfa61d6fa48-kube-api-access-2s66l\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b90669-75e1-4f89-860f-7dfa61d6fa48\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:09 crc kubenswrapper[4751]: I1123 04:14:09.539569 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:10 crc kubenswrapper[4751]: I1123 04:14:10.090008 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 04:14:10 crc kubenswrapper[4751]: W1123 04:14:10.093026 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b90669_75e1_4f89_860f_7dfa61d6fa48.slice/crio-50644291d6a4cf13151068ccd6238bdd0bcffe48428b1c8d79462125f09a32cd WatchSource:0}: Error finding container 50644291d6a4cf13151068ccd6238bdd0bcffe48428b1c8d79462125f09a32cd: Status 404 returned error can't find the container with id 50644291d6a4cf13151068ccd6238bdd0bcffe48428b1c8d79462125f09a32cd Nov 23 04:14:10 crc kubenswrapper[4751]: I1123 04:14:10.655982 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab61fde-edb6-4a1d-8a8f-597bc03064f2" path="/var/lib/kubelet/pods/7ab61fde-edb6-4a1d-8a8f-597bc03064f2/volumes" Nov 23 04:14:10 crc kubenswrapper[4751]: I1123 04:14:10.886400 4751 generic.go:334] "Generic (PLEG): container finished" podID="968acafd-d3ef-457c-853a-4dec707be397" containerID="225ddfaa6053f9b9c41c0be81e9d9497177c419bb2d8d4b152150a8d52d19111" exitCode=0 Nov 23 04:14:10 crc kubenswrapper[4751]: I1123 04:14:10.886736 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerDied","Data":"225ddfaa6053f9b9c41c0be81e9d9497177c419bb2d8d4b152150a8d52d19111"} Nov 23 04:14:10 crc kubenswrapper[4751]: I1123 04:14:10.888917 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7b90669-75e1-4f89-860f-7dfa61d6fa48","Type":"ContainerStarted","Data":"df3d1be520bc154913a81f783d69d666d799649a4181aef38b9de90b9df27b9c"} Nov 23 04:14:10 crc kubenswrapper[4751]: I1123 04:14:10.888941 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7b90669-75e1-4f89-860f-7dfa61d6fa48","Type":"ContainerStarted","Data":"50644291d6a4cf13151068ccd6238bdd0bcffe48428b1c8d79462125f09a32cd"} Nov 23 04:14:10 crc kubenswrapper[4751]: I1123 04:14:10.923411 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.923390313 podStartE2EDuration="1.923390313s" podCreationTimestamp="2025-11-23 04:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:14:10.913934053 +0000 UTC m=+1147.107605412" watchObservedRunningTime="2025-11-23 04:14:10.923390313 +0000 UTC m=+1147.117061672" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.086698 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.087781 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.088717 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.091851 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.124515 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.236981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-sg-core-conf-yaml\") pod \"968acafd-d3ef-457c-853a-4dec707be397\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.237033 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-run-httpd\") pod \"968acafd-d3ef-457c-853a-4dec707be397\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.237140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-combined-ca-bundle\") pod \"968acafd-d3ef-457c-853a-4dec707be397\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.237172 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-scripts\") pod \"968acafd-d3ef-457c-853a-4dec707be397\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.237241 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-config-data\") pod \"968acafd-d3ef-457c-853a-4dec707be397\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.237361 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-log-httpd\") pod \"968acafd-d3ef-457c-853a-4dec707be397\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.237421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qsxs\" (UniqueName: \"kubernetes.io/projected/968acafd-d3ef-457c-853a-4dec707be397-kube-api-access-8qsxs\") pod \"968acafd-d3ef-457c-853a-4dec707be397\" (UID: \"968acafd-d3ef-457c-853a-4dec707be397\") " Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.238019 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "968acafd-d3ef-457c-853a-4dec707be397" (UID: "968acafd-d3ef-457c-853a-4dec707be397"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.238190 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "968acafd-d3ef-457c-853a-4dec707be397" (UID: "968acafd-d3ef-457c-853a-4dec707be397"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.249550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-scripts" (OuterVolumeSpecName: "scripts") pod "968acafd-d3ef-457c-853a-4dec707be397" (UID: "968acafd-d3ef-457c-853a-4dec707be397"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.249568 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968acafd-d3ef-457c-853a-4dec707be397-kube-api-access-8qsxs" (OuterVolumeSpecName: "kube-api-access-8qsxs") pod "968acafd-d3ef-457c-853a-4dec707be397" (UID: "968acafd-d3ef-457c-853a-4dec707be397"). InnerVolumeSpecName "kube-api-access-8qsxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.275100 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "968acafd-d3ef-457c-853a-4dec707be397" (UID: "968acafd-d3ef-457c-853a-4dec707be397"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.327894 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "968acafd-d3ef-457c-853a-4dec707be397" (UID: "968acafd-d3ef-457c-853a-4dec707be397"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.339716 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.339816 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.339832 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.339845 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qsxs\" (UniqueName: \"kubernetes.io/projected/968acafd-d3ef-457c-853a-4dec707be397-kube-api-access-8qsxs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.339856 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.339866 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/968acafd-d3ef-457c-853a-4dec707be397-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.340884 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-config-data" (OuterVolumeSpecName: "config-data") pod "968acafd-d3ef-457c-853a-4dec707be397" (UID: "968acafd-d3ef-457c-853a-4dec707be397"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.441535 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968acafd-d3ef-457c-853a-4dec707be397-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.903222 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.909493 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"968acafd-d3ef-457c-853a-4dec707be397","Type":"ContainerDied","Data":"1ae8ed0752933db858fb90e36bc1479aaa7fef9cfcacc3ce6c85d4b5dbe3dd48"} Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.909898 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.910121 4751 scope.go:117] "RemoveContainer" containerID="c1289cdccf1f784e239fab18d3f63225cdb8168dc74e57a342a58076eeda284f" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.914810 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.949583 4751 scope.go:117] "RemoveContainer" containerID="1373e3b47122d74d50d20c7e031e1c4731a7c5e6437cdb6f5505e20f30e89f14" Nov 23 04:14:11 crc kubenswrapper[4751]: I1123 04:14:11.979477 4751 scope.go:117] "RemoveContainer" containerID="225ddfaa6053f9b9c41c0be81e9d9497177c419bb2d8d4b152150a8d52d19111" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.005548 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.020054 4751 scope.go:117] "RemoveContainer" containerID="57a058947e596dffbfa59aa761c4ccba6320eb93497f2ec7e146bd68f9fb9188" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.030799 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.049283 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:12 crc kubenswrapper[4751]: E1123 04:14:12.049923 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="proxy-httpd" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.049943 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="proxy-httpd" Nov 23 04:14:12 crc kubenswrapper[4751]: E1123 04:14:12.049985 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="ceilometer-central-agent" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.050000 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="ceilometer-central-agent" Nov 23 04:14:12 crc kubenswrapper[4751]: E1123 04:14:12.050063 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="sg-core" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 
04:14:12.050078 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="sg-core" Nov 23 04:14:12 crc kubenswrapper[4751]: E1123 04:14:12.050098 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="ceilometer-notification-agent" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.050111 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="ceilometer-notification-agent" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.050451 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="proxy-httpd" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.050484 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="ceilometer-notification-agent" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.050507 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="sg-core" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.050533 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="968acafd-d3ef-457c-853a-4dec707be397" containerName="ceilometer-central-agent" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.055220 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.057882 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.067285 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.067461 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.067655 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.149223 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-bs4bv"] Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.150711 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.170518 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.170848 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-scripts\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.170980 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-log-httpd\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.171090 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpw4q\" (UniqueName: \"kubernetes.io/projected/c38ebaaa-b827-4d5d-9257-192a4d10b148-kube-api-access-bpw4q\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.171202 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-config-data\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.171328 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-run-httpd\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.171527 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.171639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.176882 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-bs4bv"] Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.273806 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") 
" pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.273856 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.273889 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-config\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.273914 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.273934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-scripts\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.273960 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zlw\" (UniqueName: \"kubernetes.io/projected/ac5269f0-23c7-4465-a21d-9f5946acbbfc-kube-api-access-p2zlw\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.274000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-log-httpd\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.274028 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpw4q\" (UniqueName: \"kubernetes.io/projected/c38ebaaa-b827-4d5d-9257-192a4d10b148-kube-api-access-bpw4q\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.274046 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-config-data\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.274076 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-run-httpd\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.274095 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.274121 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.274154 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.274182 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.275252 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-log-httpd\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.276997 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-run-httpd\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.280129 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.280544 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.281205 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-config-data\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.282162 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-scripts\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.286105 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.292581 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpw4q\" (UniqueName: \"kubernetes.io/projected/c38ebaaa-b827-4d5d-9257-192a4d10b148-kube-api-access-bpw4q\") pod \"ceilometer-0\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.376157 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2zlw\" (UniqueName: \"kubernetes.io/projected/ac5269f0-23c7-4465-a21d-9f5946acbbfc-kube-api-access-p2zlw\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.376292 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.376366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.376438 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.376460 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-config\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.376490 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.377187 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.377234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.377445 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.377472 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-config\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.378108 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.382680 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.395329 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2zlw\" (UniqueName: \"kubernetes.io/projected/ac5269f0-23c7-4465-a21d-9f5946acbbfc-kube-api-access-p2zlw\") pod \"dnsmasq-dns-cd5cbd7b9-bs4bv\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.490848 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.664465 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="968acafd-d3ef-457c-853a-4dec707be397" path="/var/lib/kubelet/pods/968acafd-d3ef-457c-853a-4dec707be397/volumes" Nov 23 04:14:12 crc kubenswrapper[4751]: W1123 04:14:12.850741 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38ebaaa_b827_4d5d_9257_192a4d10b148.slice/crio-f31def55cd65c3df9dc7647db1c4b4ed84db659c023062c33c6044bfae56f367 WatchSource:0}: Error finding container f31def55cd65c3df9dc7647db1c4b4ed84db659c023062c33c6044bfae56f367: Status 404 returned error can't find the container with id f31def55cd65c3df9dc7647db1c4b4ed84db659c023062c33c6044bfae56f367 Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.851709 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.916702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerStarted","Data":"f31def55cd65c3df9dc7647db1c4b4ed84db659c023062c33c6044bfae56f367"} Nov 23 04:14:12 crc kubenswrapper[4751]: I1123 04:14:12.972310 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-bs4bv"] Nov 23 04:14:12 crc kubenswrapper[4751]: W1123 04:14:12.972447 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac5269f0_23c7_4465_a21d_9f5946acbbfc.slice/crio-d58f9fc6adaf8b0b04778c07972244b006cacd657848585a755e916c9c55a4c9 WatchSource:0}: Error finding container d58f9fc6adaf8b0b04778c07972244b006cacd657848585a755e916c9c55a4c9: Status 404 returned error can't find the container with id d58f9fc6adaf8b0b04778c07972244b006cacd657848585a755e916c9c55a4c9 Nov 23 04:14:13 crc kubenswrapper[4751]: I1123 04:14:13.932083 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" containerID="1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e" exitCode=0 Nov 23 04:14:13 crc kubenswrapper[4751]: I1123 04:14:13.932153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" event={"ID":"ac5269f0-23c7-4465-a21d-9f5946acbbfc","Type":"ContainerDied","Data":"1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e"} Nov 23 04:14:13 crc kubenswrapper[4751]: I1123 04:14:13.932786 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" event={"ID":"ac5269f0-23c7-4465-a21d-9f5946acbbfc","Type":"ContainerStarted","Data":"d58f9fc6adaf8b0b04778c07972244b006cacd657848585a755e916c9c55a4c9"} Nov 23 04:14:13 crc kubenswrapper[4751]: I1123 04:14:13.940276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerStarted","Data":"3c855d24cfecdc57495f61b7ff4cce891242e45993cbeba50fc24001a1ad8050"} Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.427235 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.540239 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:14 crc kubenswrapper[4751]: 
I1123 04:14:14.625919 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.951907 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerStarted","Data":"a77c1724579f1fbb0011c8a37528ed0449060ac2fe36a3da58eaef609edad9af"} Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.952265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerStarted","Data":"5c0428fc2ae2bac83efd9c04f18f2fb6f0e7a925858d3602be47c53440ddf21b"} Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.954798 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-log" containerID="cri-o://e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08" gracePeriod=30 Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.955798 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" event={"ID":"ac5269f0-23c7-4465-a21d-9f5946acbbfc","Type":"ContainerStarted","Data":"7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d"} Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.955829 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.956086 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-api" containerID="cri-o://ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d" gracePeriod=30 Nov 23 04:14:14 crc kubenswrapper[4751]: I1123 04:14:14.984432 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" podStartSLOduration=2.984414605 podStartE2EDuration="2.984414605s" podCreationTimestamp="2025-11-23 04:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:14:14.97890893 +0000 UTC m=+1151.172580289" watchObservedRunningTime="2025-11-23 04:14:14.984414605 +0000 UTC m=+1151.178085964" Nov 23 04:14:15 crc kubenswrapper[4751]: I1123 04:14:15.964966 4751 generic.go:334] "Generic (PLEG): container finished" podID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerID="e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08" exitCode=143 Nov 23 04:14:15 crc kubenswrapper[4751]: I1123 04:14:15.965068 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8246e263-9fb1-4b28-aff3-0f02e364748d","Type":"ContainerDied","Data":"e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08"} Nov 23 04:14:16 crc kubenswrapper[4751]: I1123 04:14:16.533413 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 04:14:16 crc kubenswrapper[4751]: I1123 04:14:16.976564 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerStarted","Data":"b36c4a4e47ded201e55ab424f3a17c59aa6e889662823a87ad612582acc45163"} Nov 23 04:14:16 crc kubenswrapper[4751]: I1123 04:14:16.976737 4751 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="ceilometer-central-agent" containerID="cri-o://3c855d24cfecdc57495f61b7ff4cce891242e45993cbeba50fc24001a1ad8050" gracePeriod=30 Nov 23 04:14:16 crc kubenswrapper[4751]: I1123 04:14:16.976817 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 04:14:16 crc kubenswrapper[4751]: I1123 04:14:16.977130 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="proxy-httpd" containerID="cri-o://b36c4a4e47ded201e55ab424f3a17c59aa6e889662823a87ad612582acc45163" gracePeriod=30 Nov 23 04:14:16 crc kubenswrapper[4751]: I1123 04:14:16.977180 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="sg-core" containerID="cri-o://a77c1724579f1fbb0011c8a37528ed0449060ac2fe36a3da58eaef609edad9af" gracePeriod=30 Nov 23 04:14:16 crc kubenswrapper[4751]: I1123 04:14:16.977219 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="ceilometer-notification-agent" containerID="cri-o://5c0428fc2ae2bac83efd9c04f18f2fb6f0e7a925858d3602be47c53440ddf21b" gracePeriod=30 Nov 23 04:14:17 crc kubenswrapper[4751]: I1123 04:14:17.011452 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.818142864 podStartE2EDuration="6.011426475s" podCreationTimestamp="2025-11-23 04:14:11 +0000 UTC" firstStartedPulling="2025-11-23 04:14:12.853417832 +0000 UTC m=+1149.047089191" lastFinishedPulling="2025-11-23 04:14:16.046701443 +0000 UTC m=+1152.240372802" observedRunningTime="2025-11-23 04:14:17.004692138 +0000 UTC m=+1153.198363497" watchObservedRunningTime="2025-11-23 04:14:17.011426475 +0000 UTC m=+1153.205097854" Nov 23 04:14:17 crc kubenswrapper[4751]: I1123 04:14:17.992394 4751 generic.go:334] "Generic (PLEG): container finished" podID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerID="b36c4a4e47ded201e55ab424f3a17c59aa6e889662823a87ad612582acc45163" exitCode=0 Nov 23 04:14:17 crc kubenswrapper[4751]: I1123 04:14:17.992710 4751 generic.go:334] "Generic (PLEG): container finished" podID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerID="a77c1724579f1fbb0011c8a37528ed0449060ac2fe36a3da58eaef609edad9af" exitCode=2 Nov 23 04:14:17 crc kubenswrapper[4751]: I1123 04:14:17.992730 4751 generic.go:334] "Generic (PLEG): container finished" podID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerID="5c0428fc2ae2bac83efd9c04f18f2fb6f0e7a925858d3602be47c53440ddf21b" exitCode=0 Nov 23 04:14:17 crc kubenswrapper[4751]: I1123 04:14:17.992490 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerDied","Data":"b36c4a4e47ded201e55ab424f3a17c59aa6e889662823a87ad612582acc45163"} Nov 23 04:14:17 crc kubenswrapper[4751]: I1123 04:14:17.992822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerDied","Data":"a77c1724579f1fbb0011c8a37528ed0449060ac2fe36a3da58eaef609edad9af"} Nov 23 04:14:17 crc kubenswrapper[4751]: I1123 04:14:17.992865 4751 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerDied","Data":"5c0428fc2ae2bac83efd9c04f18f2fb6f0e7a925858d3602be47c53440ddf21b"} Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.625600 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.737911 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs9dk\" (UniqueName: \"kubernetes.io/projected/8246e263-9fb1-4b28-aff3-0f02e364748d-kube-api-access-xs9dk\") pod \"8246e263-9fb1-4b28-aff3-0f02e364748d\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.738030 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-config-data\") pod \"8246e263-9fb1-4b28-aff3-0f02e364748d\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.738096 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-combined-ca-bundle\") pod \"8246e263-9fb1-4b28-aff3-0f02e364748d\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.738126 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8246e263-9fb1-4b28-aff3-0f02e364748d-logs\") pod \"8246e263-9fb1-4b28-aff3-0f02e364748d\" (UID: \"8246e263-9fb1-4b28-aff3-0f02e364748d\") " Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.739047 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8246e263-9fb1-4b28-aff3-0f02e364748d-logs" (OuterVolumeSpecName: "logs") pod "8246e263-9fb1-4b28-aff3-0f02e364748d" (UID: "8246e263-9fb1-4b28-aff3-0f02e364748d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.746362 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8246e263-9fb1-4b28-aff3-0f02e364748d-kube-api-access-xs9dk" (OuterVolumeSpecName: "kube-api-access-xs9dk") pod "8246e263-9fb1-4b28-aff3-0f02e364748d" (UID: "8246e263-9fb1-4b28-aff3-0f02e364748d"). InnerVolumeSpecName "kube-api-access-xs9dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.776677 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8246e263-9fb1-4b28-aff3-0f02e364748d" (UID: "8246e263-9fb1-4b28-aff3-0f02e364748d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.781373 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-config-data" (OuterVolumeSpecName: "config-data") pod "8246e263-9fb1-4b28-aff3-0f02e364748d" (UID: "8246e263-9fb1-4b28-aff3-0f02e364748d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.839863 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8246e263-9fb1-4b28-aff3-0f02e364748d-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.839896 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs9dk\" (UniqueName: \"kubernetes.io/projected/8246e263-9fb1-4b28-aff3-0f02e364748d-kube-api-access-xs9dk\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.839907 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:18 crc kubenswrapper[4751]: I1123 04:14:18.839915 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8246e263-9fb1-4b28-aff3-0f02e364748d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.003108 4751 generic.go:334] "Generic (PLEG): container finished" podID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerID="ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d" exitCode=0 Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.003183 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.003174 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8246e263-9fb1-4b28-aff3-0f02e364748d","Type":"ContainerDied","Data":"ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d"} Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.003248 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8246e263-9fb1-4b28-aff3-0f02e364748d","Type":"ContainerDied","Data":"82606bece04a5309821fdedcf20ac1223d87af4dcda18885f0b080e1fd5717d8"} Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.003271 4751 scope.go:117] "RemoveContainer" containerID="ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.022460 4751 scope.go:117] "RemoveContainer" containerID="e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.044013 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.071988 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.073613 4751 scope.go:117] "RemoveContainer" containerID="ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d" Nov 23 04:14:19 crc kubenswrapper[4751]: E1123 04:14:19.075685 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d\": container with ID starting with ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d not found: ID does not exist" containerID="ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.075810 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d"} err="failed to get container status \"ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d\": rpc error: code = NotFound desc = could not find container \"ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d\": container with ID starting with ec74a5405ee91ccb0065c1d5e4d8c8b734c1ddad8b811b06d3963a9f0501da9d not found: ID does not exist" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.075920 4751 scope.go:117] "RemoveContainer" containerID="e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08" Nov 23 04:14:19 crc kubenswrapper[4751]: E1123 04:14:19.079850 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08\": container with ID starting with e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08 not found: ID does not exist" containerID="e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.079892 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08"} err="failed to get container status \"e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08\": rpc error: code = NotFound desc = could not find container \"e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08\": container with ID starting with e37025c4495eb3bab350fe959c006e374a80f36771fd5408de4a3eb9dc8d7d08 not found: ID does not exist" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.080379 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:19 crc kubenswrapper[4751]: E1123 04:14:19.081020 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-log" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.081056 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-log" Nov 23 04:14:19 crc kubenswrapper[4751]: E1123 04:14:19.081113 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-api" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.081127 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-api" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.081542 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-api" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.081576 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" containerName="nova-api-log" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.083221 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.086355 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.092131 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.095194 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.095474 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.246952 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-logs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.247208 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.247330 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.247470 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25cs\" (UniqueName: \"kubernetes.io/projected/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-kube-api-access-p25cs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.247616 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.247723 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-config-data\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.348990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.349101 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p25cs\" (UniqueName: \"kubernetes.io/projected/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-kube-api-access-p25cs\") 
pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.349212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.349298 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-config-data\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.349390 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-logs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.349450 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.351818 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-logs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.354026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.358034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.359770 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-config-data\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.360400 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.368969 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p25cs\" (UniqueName: \"kubernetes.io/projected/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-kube-api-access-p25cs\") pod \"nova-api-0\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " pod="openstack/nova-api-0" Nov 
23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.424147 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.540489 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.565322 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:19 crc kubenswrapper[4751]: I1123 04:14:19.900815 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.014712 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bd88601-50c8-4c55-b3d3-7c6df124d5e5","Type":"ContainerStarted","Data":"3d221e3fb465cec67d3f0b356ed24992b31e34cd46be34bcb20cfa83038e27dc"} Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.021385 4751 generic.go:334] "Generic (PLEG): container finished" podID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerID="3c855d24cfecdc57495f61b7ff4cce891242e45993cbeba50fc24001a1ad8050" exitCode=0 Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.022292 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerDied","Data":"3c855d24cfecdc57495f61b7ff4cce891242e45993cbeba50fc24001a1ad8050"} Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.041421 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.200462 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wvmrc"] Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.202718 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.206118 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.206273 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.238118 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wvmrc"] Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.370629 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkjn\" (UniqueName: \"kubernetes.io/projected/fe51cb37-a922-44f6-a214-22c763cee34c-kube-api-access-9tkjn\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.370788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.370842 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-config-data\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.371028 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-scripts\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.422829 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.472963 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.473497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-config-data\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.473596 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-scripts\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.473712 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tkjn\" (UniqueName: \"kubernetes.io/projected/fe51cb37-a922-44f6-a214-22c763cee34c-kube-api-access-9tkjn\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.477158 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.479007 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-config-data\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.492947 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-scripts\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.494130 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tkjn\" (UniqueName: \"kubernetes.io/projected/fe51cb37-a922-44f6-a214-22c763cee34c-kube-api-access-9tkjn\") pod \"nova-cell1-cell-mapping-wvmrc\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.529674 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.574868 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpw4q\" (UniqueName: \"kubernetes.io/projected/c38ebaaa-b827-4d5d-9257-192a4d10b148-kube-api-access-bpw4q\") pod \"c38ebaaa-b827-4d5d-9257-192a4d10b148\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.574915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-sg-core-conf-yaml\") pod \"c38ebaaa-b827-4d5d-9257-192a4d10b148\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.575038 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-config-data\") pod \"c38ebaaa-b827-4d5d-9257-192a4d10b148\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.575107 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-ceilometer-tls-certs\") pod \"c38ebaaa-b827-4d5d-9257-192a4d10b148\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.575204 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-log-httpd\") pod \"c38ebaaa-b827-4d5d-9257-192a4d10b148\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.575281 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-combined-ca-bundle\") pod \"c38ebaaa-b827-4d5d-9257-192a4d10b148\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.575316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-scripts\") pod \"c38ebaaa-b827-4d5d-9257-192a4d10b148\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.575338 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-run-httpd\") pod \"c38ebaaa-b827-4d5d-9257-192a4d10b148\" (UID: \"c38ebaaa-b827-4d5d-9257-192a4d10b148\") " Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.576015 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c38ebaaa-b827-4d5d-9257-192a4d10b148" (UID: "c38ebaaa-b827-4d5d-9257-192a4d10b148"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.581948 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c38ebaaa-b827-4d5d-9257-192a4d10b148" (UID: "c38ebaaa-b827-4d5d-9257-192a4d10b148"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.584473 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-scripts" (OuterVolumeSpecName: "scripts") pod "c38ebaaa-b827-4d5d-9257-192a4d10b148" (UID: "c38ebaaa-b827-4d5d-9257-192a4d10b148"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.594601 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c38ebaaa-b827-4d5d-9257-192a4d10b148-kube-api-access-bpw4q" (OuterVolumeSpecName: "kube-api-access-bpw4q") pod "c38ebaaa-b827-4d5d-9257-192a4d10b148" (UID: "c38ebaaa-b827-4d5d-9257-192a4d10b148"). InnerVolumeSpecName "kube-api-access-bpw4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.623467 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c38ebaaa-b827-4d5d-9257-192a4d10b148" (UID: "c38ebaaa-b827-4d5d-9257-192a4d10b148"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.639542 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c38ebaaa-b827-4d5d-9257-192a4d10b148" (UID: "c38ebaaa-b827-4d5d-9257-192a4d10b148"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.678024 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.678055 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.678065 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.678076 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c38ebaaa-b827-4d5d-9257-192a4d10b148-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.678085 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpw4q\" (UniqueName: \"kubernetes.io/projected/c38ebaaa-b827-4d5d-9257-192a4d10b148-kube-api-access-bpw4q\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.678095 4751 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.685025 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8246e263-9fb1-4b28-aff3-0f02e364748d" path="/var/lib/kubelet/pods/8246e263-9fb1-4b28-aff3-0f02e364748d/volumes" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.703886 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c38ebaaa-b827-4d5d-9257-192a4d10b148" (UID: "c38ebaaa-b827-4d5d-9257-192a4d10b148"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.711574 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-config-data" (OuterVolumeSpecName: "config-data") pod "c38ebaaa-b827-4d5d-9257-192a4d10b148" (UID: "c38ebaaa-b827-4d5d-9257-192a4d10b148"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.780787 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:20 crc kubenswrapper[4751]: I1123 04:14:20.780825 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38ebaaa-b827-4d5d-9257-192a4d10b148-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:20 crc kubenswrapper[4751]: W1123 04:14:20.998001 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe51cb37_a922_44f6_a214_22c763cee34c.slice/crio-5904752ad19cae0ac90b262d23bfbe9b299e91f922e608c2d8b6ef3514e64d38 WatchSource:0}: Error finding container 5904752ad19cae0ac90b262d23bfbe9b299e91f922e608c2d8b6ef3514e64d38: Status 404 returned error can't find the container with id 5904752ad19cae0ac90b262d23bfbe9b299e91f922e608c2d8b6ef3514e64d38 Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.001871 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wvmrc"] Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.036482 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bd88601-50c8-4c55-b3d3-7c6df124d5e5","Type":"ContainerStarted","Data":"61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2"} Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.036543 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bd88601-50c8-4c55-b3d3-7c6df124d5e5","Type":"ContainerStarted","Data":"eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e"} Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.046046 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c38ebaaa-b827-4d5d-9257-192a4d10b148","Type":"ContainerDied","Data":"f31def55cd65c3df9dc7647db1c4b4ed84db659c023062c33c6044bfae56f367"} Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.046081 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.046097 4751 scope.go:117] "RemoveContainer" containerID="b36c4a4e47ded201e55ab424f3a17c59aa6e889662823a87ad612582acc45163" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.048790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wvmrc" event={"ID":"fe51cb37-a922-44f6-a214-22c763cee34c","Type":"ContainerStarted","Data":"5904752ad19cae0ac90b262d23bfbe9b299e91f922e608c2d8b6ef3514e64d38"} Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.079700 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.07968196 podStartE2EDuration="2.07968196s" podCreationTimestamp="2025-11-23 04:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:14:21.056493118 +0000 UTC m=+1157.250164497" watchObservedRunningTime="2025-11-23 04:14:21.07968196 +0000 UTC m=+1157.273353319" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.104326 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.106952 4751 scope.go:117] "RemoveContainer" containerID="a77c1724579f1fbb0011c8a37528ed0449060ac2fe36a3da58eaef609edad9af" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.118720 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.135547 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:21 crc kubenswrapper[4751]: E1123 04:14:21.136033 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="proxy-httpd" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.136056 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="proxy-httpd" Nov 23 04:14:21 crc kubenswrapper[4751]: E1123 04:14:21.136071 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="sg-core" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.136079 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="sg-core" Nov 23 04:14:21 crc kubenswrapper[4751]: E1123 04:14:21.136101 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="ceilometer-notification-agent" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.136110 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="ceilometer-notification-agent" Nov 23 04:14:21 crc kubenswrapper[4751]: E1123 04:14:21.136147 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="ceilometer-central-agent" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.136155 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="ceilometer-central-agent" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.136419 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="sg-core" Nov 23 04:14:21 crc 
kubenswrapper[4751]: I1123 04:14:21.136442 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="ceilometer-notification-agent" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.136456 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="proxy-httpd" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.136466 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" containerName="ceilometer-central-agent" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.138861 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.142014 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.144864 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.144901 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.146958 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.148286 4751 scope.go:117] "RemoveContainer" containerID="5c0428fc2ae2bac83efd9c04f18f2fb6f0e7a925858d3602be47c53440ddf21b" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.183896 4751 scope.go:117] "RemoveContainer" containerID="3c855d24cfecdc57495f61b7ff4cce891242e45993cbeba50fc24001a1ad8050" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.292552 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c883930-39a6-4aa2-8be9-08ddb0d187e8-run-httpd\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.292593 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.292613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.292634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-config-data\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.292806 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-scripts\") pod 
\"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.292910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.292986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k2g8\" (UniqueName: \"kubernetes.io/projected/5c883930-39a6-4aa2-8be9-08ddb0d187e8-kube-api-access-7k2g8\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.293322 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c883930-39a6-4aa2-8be9-08ddb0d187e8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.395294 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.395546 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k2g8\" (UniqueName: \"kubernetes.io/projected/5c883930-39a6-4aa2-8be9-08ddb0d187e8-kube-api-access-7k2g8\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.395855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c883930-39a6-4aa2-8be9-08ddb0d187e8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.395934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c883930-39a6-4aa2-8be9-08ddb0d187e8-run-httpd\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.395984 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.396025 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.396077 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-config-data\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.396176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-scripts\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.398220 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c883930-39a6-4aa2-8be9-08ddb0d187e8-log-httpd\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.398486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c883930-39a6-4aa2-8be9-08ddb0d187e8-run-httpd\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.402871 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.403951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.404189 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.410450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-scripts\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.411540 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c883930-39a6-4aa2-8be9-08ddb0d187e8-config-data\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.415396 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k2g8\" (UniqueName: \"kubernetes.io/projected/5c883930-39a6-4aa2-8be9-08ddb0d187e8-kube-api-access-7k2g8\") pod \"ceilometer-0\" (UID: \"5c883930-39a6-4aa2-8be9-08ddb0d187e8\") " pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.458536 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 04:14:21 crc kubenswrapper[4751]: I1123 04:14:21.840245 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 04:14:22 crc kubenswrapper[4751]: I1123 04:14:22.062090 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c883930-39a6-4aa2-8be9-08ddb0d187e8","Type":"ContainerStarted","Data":"df0523c87f417d63e6f8778acfa7e872febe2a3605fe72b83fa9c49d965a3aec"} Nov 23 04:14:22 crc kubenswrapper[4751]: I1123 04:14:22.069523 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wvmrc" event={"ID":"fe51cb37-a922-44f6-a214-22c763cee34c","Type":"ContainerStarted","Data":"78faa58eb794c18d8e9b698ba99599bca266c8c1e9ac12ee2e4ef93603740fad"} Nov 23 04:14:22 crc kubenswrapper[4751]: I1123 04:14:22.092953 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wvmrc" podStartSLOduration=2.092933952 podStartE2EDuration="2.092933952s" podCreationTimestamp="2025-11-23 04:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:14:22.084561272 +0000 UTC m=+1158.278232631" watchObservedRunningTime="2025-11-23 04:14:22.092933952 +0000 UTC m=+1158.286605311" Nov 23 04:14:22 crc kubenswrapper[4751]: I1123 04:14:22.493561 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:14:22 crc kubenswrapper[4751]: I1123 04:14:22.558947 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-6xqsc"] Nov 23 04:14:22 crc kubenswrapper[4751]: I1123 04:14:22.559300 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" podUID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerName="dnsmasq-dns" containerID="cri-o://e5b828d2434689aad86371b4d236350434d18b6f2987fd4086a6187cd6851426" gracePeriod=10 Nov 23 04:14:22 crc kubenswrapper[4751]: I1123 04:14:22.659910 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c38ebaaa-b827-4d5d-9257-192a4d10b148" path="/var/lib/kubelet/pods/c38ebaaa-b827-4d5d-9257-192a4d10b148/volumes" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.082184 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c883930-39a6-4aa2-8be9-08ddb0d187e8","Type":"ContainerStarted","Data":"7c00709679655d775dfcee6f22b5460c56ff3af4ec01f2cb5bf13f3f9458e12e"} Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.085227 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerID="e5b828d2434689aad86371b4d236350434d18b6f2987fd4086a6187cd6851426" exitCode=0 Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.085290 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" event={"ID":"eb43151b-50c2-46d9-8c0d-b9fe0573aa31","Type":"ContainerDied","Data":"e5b828d2434689aad86371b4d236350434d18b6f2987fd4086a6187cd6851426"} Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.085329 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" event={"ID":"eb43151b-50c2-46d9-8c0d-b9fe0573aa31","Type":"ContainerDied","Data":"1eb306af82112f0926ed1d36b4882e14a8e40273059110dcdf779731c5e7b803"} Nov 23 04:14:23 crc 
kubenswrapper[4751]: I1123 04:14:23.085382 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb306af82112f0926ed1d36b4882e14a8e40273059110dcdf779731c5e7b803" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.136185 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.236099 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-nb\") pod \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.236329 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-config\") pod \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.236399 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-sb\") pod \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.236559 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88szk\" (UniqueName: \"kubernetes.io/projected/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-kube-api-access-88szk\") pod \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.236705 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-svc\") pod \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.236783 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-swift-storage-0\") pod \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\" (UID: \"eb43151b-50c2-46d9-8c0d-b9fe0573aa31\") " Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.241495 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-kube-api-access-88szk" (OuterVolumeSpecName: "kube-api-access-88szk") pod "eb43151b-50c2-46d9-8c0d-b9fe0573aa31" (UID: "eb43151b-50c2-46d9-8c0d-b9fe0573aa31"). InnerVolumeSpecName "kube-api-access-88szk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.293559 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb43151b-50c2-46d9-8c0d-b9fe0573aa31" (UID: "eb43151b-50c2-46d9-8c0d-b9fe0573aa31"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.293934 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb43151b-50c2-46d9-8c0d-b9fe0573aa31" (UID: "eb43151b-50c2-46d9-8c0d-b9fe0573aa31"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.294631 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-config" (OuterVolumeSpecName: "config") pod "eb43151b-50c2-46d9-8c0d-b9fe0573aa31" (UID: "eb43151b-50c2-46d9-8c0d-b9fe0573aa31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.297550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb43151b-50c2-46d9-8c0d-b9fe0573aa31" (UID: "eb43151b-50c2-46d9-8c0d-b9fe0573aa31"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.299485 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb43151b-50c2-46d9-8c0d-b9fe0573aa31" (UID: "eb43151b-50c2-46d9-8c0d-b9fe0573aa31"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.339065 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88szk\" (UniqueName: \"kubernetes.io/projected/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-kube-api-access-88szk\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.339092 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.339101 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.339111 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.339119 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:23 crc kubenswrapper[4751]: I1123 04:14:23.339126 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb43151b-50c2-46d9-8c0d-b9fe0573aa31-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:24 crc kubenswrapper[4751]: I1123 04:14:24.096908 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" Nov 23 04:14:24 crc kubenswrapper[4751]: I1123 04:14:24.099123 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c883930-39a6-4aa2-8be9-08ddb0d187e8","Type":"ContainerStarted","Data":"82e6741c34792f668b5091aee21b45382ae49d9fa2cbdd87f12e880a00e7d6e7"} Nov 23 04:14:24 crc kubenswrapper[4751]: I1123 04:14:24.099168 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c883930-39a6-4aa2-8be9-08ddb0d187e8","Type":"ContainerStarted","Data":"2aeb200872cd031a39084a9a538681b414b91c9663c05871c67fbe790354675c"} Nov 23 04:14:24 crc kubenswrapper[4751]: I1123 04:14:24.138391 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-6xqsc"] Nov 23 04:14:24 crc kubenswrapper[4751]: I1123 04:14:24.145891 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-6xqsc"] Nov 23 04:14:24 crc kubenswrapper[4751]: I1123 04:14:24.664208 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" path="/var/lib/kubelet/pods/eb43151b-50c2-46d9-8c0d-b9fe0573aa31/volumes" Nov 23 04:14:26 crc kubenswrapper[4751]: I1123 04:14:26.127536 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c883930-39a6-4aa2-8be9-08ddb0d187e8","Type":"ContainerStarted","Data":"0bafee2b74373b9457aa91c5a69d73875294e44e7935043f4ed4c2679213f358"} Nov 23 04:14:26 crc kubenswrapper[4751]: I1123 04:14:26.127785 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 04:14:26 crc kubenswrapper[4751]: I1123 04:14:26.164845 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.905696763 podStartE2EDuration="5.164782641s" podCreationTimestamp="2025-11-23 04:14:21 +0000 UTC" firstStartedPulling="2025-11-23 04:14:21.821097108 +0000 UTC m=+1158.014768467" lastFinishedPulling="2025-11-23 04:14:25.080182976 +0000 UTC m=+1161.273854345" observedRunningTime="2025-11-23 04:14:26.150640628 +0000 UTC m=+1162.344312007" watchObservedRunningTime="2025-11-23 04:14:26.164782641 +0000 UTC m=+1162.358454020" Nov 23 04:14:27 crc kubenswrapper[4751]: I1123 04:14:27.141515 4751 generic.go:334] "Generic (PLEG): container finished" podID="fe51cb37-a922-44f6-a214-22c763cee34c" containerID="78faa58eb794c18d8e9b698ba99599bca266c8c1e9ac12ee2e4ef93603740fad" exitCode=0 Nov 23 04:14:27 crc kubenswrapper[4751]: I1123 04:14:27.141603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wvmrc" event={"ID":"fe51cb37-a922-44f6-a214-22c763cee34c","Type":"ContainerDied","Data":"78faa58eb794c18d8e9b698ba99599bca266c8c1e9ac12ee2e4ef93603740fad"} Nov 23 04:14:27 crc kubenswrapper[4751]: I1123 04:14:27.780550 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-bccf8f775-6xqsc" podUID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.188:5353: i/o timeout" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.611757 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.740168 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-config-data\") pod \"fe51cb37-a922-44f6-a214-22c763cee34c\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.740292 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-combined-ca-bundle\") pod \"fe51cb37-a922-44f6-a214-22c763cee34c\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.740398 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-scripts\") pod \"fe51cb37-a922-44f6-a214-22c763cee34c\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.740536 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tkjn\" (UniqueName: \"kubernetes.io/projected/fe51cb37-a922-44f6-a214-22c763cee34c-kube-api-access-9tkjn\") pod \"fe51cb37-a922-44f6-a214-22c763cee34c\" (UID: \"fe51cb37-a922-44f6-a214-22c763cee34c\") " Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.748773 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-scripts" (OuterVolumeSpecName: "scripts") pod "fe51cb37-a922-44f6-a214-22c763cee34c" (UID: "fe51cb37-a922-44f6-a214-22c763cee34c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.754564 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe51cb37-a922-44f6-a214-22c763cee34c-kube-api-access-9tkjn" (OuterVolumeSpecName: "kube-api-access-9tkjn") pod "fe51cb37-a922-44f6-a214-22c763cee34c" (UID: "fe51cb37-a922-44f6-a214-22c763cee34c"). InnerVolumeSpecName "kube-api-access-9tkjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.779293 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-config-data" (OuterVolumeSpecName: "config-data") pod "fe51cb37-a922-44f6-a214-22c763cee34c" (UID: "fe51cb37-a922-44f6-a214-22c763cee34c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.779938 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe51cb37-a922-44f6-a214-22c763cee34c" (UID: "fe51cb37-a922-44f6-a214-22c763cee34c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.842803 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.843018 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.843133 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe51cb37-a922-44f6-a214-22c763cee34c-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:28 crc kubenswrapper[4751]: I1123 04:14:28.843219 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tkjn\" (UniqueName: \"kubernetes.io/projected/fe51cb37-a922-44f6-a214-22c763cee34c-kube-api-access-9tkjn\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.169216 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wvmrc" event={"ID":"fe51cb37-a922-44f6-a214-22c763cee34c","Type":"ContainerDied","Data":"5904752ad19cae0ac90b262d23bfbe9b299e91f922e608c2d8b6ef3514e64d38"} Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.169835 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5904752ad19cae0ac90b262d23bfbe9b299e91f922e608c2d8b6ef3514e64d38" Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.170015 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wvmrc" Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.395711 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.395964 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9fec9886-9b0e-4458-a67f-afbe529fa2c3" containerName="nova-scheduler-scheduler" containerID="cri-o://4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563" gracePeriod=30 Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.419165 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.419688 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-log" containerID="cri-o://d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608" gracePeriod=30 Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.419721 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-metadata" containerID="cri-o://b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a" gracePeriod=30 Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.424911 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.424951 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 
04:14:29 crc kubenswrapper[4751]: I1123 04:14:29.465951 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:30 crc kubenswrapper[4751]: I1123 04:14:30.179397 4751 generic.go:334] "Generic (PLEG): container finished" podID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerID="d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608" exitCode=143 Nov 23 04:14:30 crc kubenswrapper[4751]: I1123 04:14:30.179591 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97400baf-e157-4da0-a69f-fe052eb9ec7c","Type":"ContainerDied","Data":"d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608"} Nov 23 04:14:30 crc kubenswrapper[4751]: I1123 04:14:30.180731 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-log" containerID="cri-o://eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e" gracePeriod=30 Nov 23 04:14:30 crc kubenswrapper[4751]: I1123 04:14:30.180880 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-api" containerID="cri-o://61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2" gracePeriod=30 Nov 23 04:14:30 crc kubenswrapper[4751]: I1123 04:14:30.189323 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": EOF" Nov 23 04:14:30 crc kubenswrapper[4751]: I1123 04:14:30.190118 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": EOF" Nov 23 04:14:31 crc kubenswrapper[4751]: E1123 04:14:31.053728 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 04:14:31 crc kubenswrapper[4751]: E1123 04:14:31.055168 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 04:14:31 crc kubenswrapper[4751]: E1123 04:14:31.056984 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 04:14:31 crc kubenswrapper[4751]: E1123 04:14:31.057030 4751 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9fec9886-9b0e-4458-a67f-afbe529fa2c3" containerName="nova-scheduler-scheduler" Nov 23 04:14:31 crc 
kubenswrapper[4751]: I1123 04:14:31.193816 4751 generic.go:334] "Generic (PLEG): container finished" podID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerID="eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e" exitCode=143 Nov 23 04:14:31 crc kubenswrapper[4751]: I1123 04:14:31.194593 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bd88601-50c8-4c55-b3d3-7c6df124d5e5","Type":"ContainerDied","Data":"eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e"} Nov 23 04:14:32 crc kubenswrapper[4751]: I1123 04:14:32.567750 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:41312->10.217.0.191:8775: read: connection reset by peer" Nov 23 04:14:32 crc kubenswrapper[4751]: I1123 04:14:32.567839 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:41308->10.217.0.191:8775: read: connection reset by peer" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.082133 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.129921 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-combined-ca-bundle\") pod \"97400baf-e157-4da0-a69f-fe052eb9ec7c\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.130030 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97400baf-e157-4da0-a69f-fe052eb9ec7c-logs\") pod \"97400baf-e157-4da0-a69f-fe052eb9ec7c\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.130152 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjpnz\" (UniqueName: \"kubernetes.io/projected/97400baf-e157-4da0-a69f-fe052eb9ec7c-kube-api-access-zjpnz\") pod \"97400baf-e157-4da0-a69f-fe052eb9ec7c\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.130240 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-nova-metadata-tls-certs\") pod \"97400baf-e157-4da0-a69f-fe052eb9ec7c\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.130277 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-config-data\") pod \"97400baf-e157-4da0-a69f-fe052eb9ec7c\" (UID: \"97400baf-e157-4da0-a69f-fe052eb9ec7c\") " Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.137532 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97400baf-e157-4da0-a69f-fe052eb9ec7c-logs" (OuterVolumeSpecName: "logs") pod "97400baf-e157-4da0-a69f-fe052eb9ec7c" (UID: 
"97400baf-e157-4da0-a69f-fe052eb9ec7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.144253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97400baf-e157-4da0-a69f-fe052eb9ec7c-kube-api-access-zjpnz" (OuterVolumeSpecName: "kube-api-access-zjpnz") pod "97400baf-e157-4da0-a69f-fe052eb9ec7c" (UID: "97400baf-e157-4da0-a69f-fe052eb9ec7c"). InnerVolumeSpecName "kube-api-access-zjpnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.160501 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-config-data" (OuterVolumeSpecName: "config-data") pod "97400baf-e157-4da0-a69f-fe052eb9ec7c" (UID: "97400baf-e157-4da0-a69f-fe052eb9ec7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.200513 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97400baf-e157-4da0-a69f-fe052eb9ec7c" (UID: "97400baf-e157-4da0-a69f-fe052eb9ec7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.209231 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "97400baf-e157-4da0-a69f-fe052eb9ec7c" (UID: "97400baf-e157-4da0-a69f-fe052eb9ec7c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.226410 4751 generic.go:334] "Generic (PLEG): container finished" podID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerID="b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a" exitCode=0 Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.226515 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97400baf-e157-4da0-a69f-fe052eb9ec7c","Type":"ContainerDied","Data":"b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a"} Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.226589 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97400baf-e157-4da0-a69f-fe052eb9ec7c","Type":"ContainerDied","Data":"fe3ddaec94c3c857837a3580905e432155d3ac68639b5c5f8609cd74a8b35e2e"} Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.226597 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.226616 4751 scope.go:117] "RemoveContainer" containerID="b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.233875 4751 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.234004 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.234116 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97400baf-e157-4da0-a69f-fe052eb9ec7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.234223 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97400baf-e157-4da0-a69f-fe052eb9ec7c-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.234311 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjpnz\" (UniqueName: \"kubernetes.io/projected/97400baf-e157-4da0-a69f-fe052eb9ec7c-kube-api-access-zjpnz\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.291673 4751 scope.go:117] "RemoveContainer" containerID="d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.318099 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.330303 4751 scope.go:117] "RemoveContainer" containerID="b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a" Nov 23 04:14:33 crc kubenswrapper[4751]: E1123 04:14:33.330836 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a\": container with ID starting with b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a not found: ID does not exist" containerID="b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.330882 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a"} err="failed to get container status \"b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a\": rpc error: code = NotFound desc = could not find container \"b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a\": container with ID starting with b6a757941fc5810265f7e93a4be0adf76179507af0270d55662c36eef8fb242a not found: ID does not exist" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.330911 4751 scope.go:117] "RemoveContainer" containerID="d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608" Nov 23 04:14:33 crc kubenswrapper[4751]: E1123 04:14:33.331159 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608\": container with ID starting with d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608 not found: ID does not exist" containerID="d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.331181 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608"} err="failed to get container status \"d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608\": rpc error: code = NotFound desc = could not find container \"d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608\": container with ID starting with d08a3e19dc55ad63e223a4a9206e844d78f3405815f8828a2788f3fb7bfef608 not found: ID does not exist" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.331210 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.341536 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:14:33 crc kubenswrapper[4751]: E1123 04:14:33.341903 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-metadata" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.341920 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-metadata" Nov 23 04:14:33 crc kubenswrapper[4751]: E1123 04:14:33.341938 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-log" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.341945 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-log" Nov 23 04:14:33 crc kubenswrapper[4751]: E1123 04:14:33.341982 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe51cb37-a922-44f6-a214-22c763cee34c" containerName="nova-manage" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.341989 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe51cb37-a922-44f6-a214-22c763cee34c" containerName="nova-manage" Nov 23 04:14:33 crc kubenswrapper[4751]: E1123 04:14:33.341999 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerName="dnsmasq-dns" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.342005 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerName="dnsmasq-dns" Nov 23 04:14:33 crc kubenswrapper[4751]: E1123 04:14:33.342021 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerName="init" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.342027 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerName="init" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.342195 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-log" Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.342204 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb43151b-50c2-46d9-8c0d-b9fe0573aa31" containerName="dnsmasq-dns" Nov 23 04:14:33 
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.342224 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" containerName="nova-metadata-metadata"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.342234 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe51cb37-a922-44f6-a214-22c763cee34c" containerName="nova-manage"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.343231 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.345648 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.373662 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.373935 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.437274 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94dfg\" (UniqueName: \"kubernetes.io/projected/b98d09fb-41ac-4b07-8334-72a33cf11ba6-kube-api-access-94dfg\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.437391 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.437509 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b98d09fb-41ac-4b07-8334-72a33cf11ba6-logs\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.437712 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-config-data\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.437851 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.540162 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94dfg\" (UniqueName: \"kubernetes.io/projected/b98d09fb-41ac-4b07-8334-72a33cf11ba6-kube-api-access-94dfg\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.540570 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.540605 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b98d09fb-41ac-4b07-8334-72a33cf11ba6-logs\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.540674 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-config-data\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.540735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.541581 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b98d09fb-41ac-4b07-8334-72a33cf11ba6-logs\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.544432 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-config-data\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.544556 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.548542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98d09fb-41ac-4b07-8334-72a33cf11ba6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.561910 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94dfg\" (UniqueName: \"kubernetes.io/projected/b98d09fb-41ac-4b07-8334-72a33cf11ba6-kube-api-access-94dfg\") pod \"nova-metadata-0\" (UID: \"b98d09fb-41ac-4b07-8334-72a33cf11ba6\") " pod="openstack/nova-metadata-0"
Nov 23 04:14:33 crc kubenswrapper[4751]: I1123 04:14:33.686321 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 04:14:34 crc kubenswrapper[4751]: I1123 04:14:34.154473 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 04:14:34 crc kubenswrapper[4751]: I1123 04:14:34.241320 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b98d09fb-41ac-4b07-8334-72a33cf11ba6","Type":"ContainerStarted","Data":"ca78d35b6af11978977f0b0f799e44b9a15a3e4ddacce1483e90087ba39cfd4a"} Nov 23 04:14:34 crc kubenswrapper[4751]: I1123 04:14:34.655035 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97400baf-e157-4da0-a69f-fe052eb9ec7c" path="/var/lib/kubelet/pods/97400baf-e157-4da0-a69f-fe052eb9ec7c/volumes" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.020453 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.074430 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grhfn\" (UniqueName: \"kubernetes.io/projected/9fec9886-9b0e-4458-a67f-afbe529fa2c3-kube-api-access-grhfn\") pod \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.074561 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-config-data\") pod \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.074675 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-combined-ca-bundle\") pod \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\" (UID: \"9fec9886-9b0e-4458-a67f-afbe529fa2c3\") " Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.097293 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fec9886-9b0e-4458-a67f-afbe529fa2c3-kube-api-access-grhfn" (OuterVolumeSpecName: "kube-api-access-grhfn") pod "9fec9886-9b0e-4458-a67f-afbe529fa2c3" (UID: "9fec9886-9b0e-4458-a67f-afbe529fa2c3"). InnerVolumeSpecName "kube-api-access-grhfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.125838 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fec9886-9b0e-4458-a67f-afbe529fa2c3" (UID: "9fec9886-9b0e-4458-a67f-afbe529fa2c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.127600 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-config-data" (OuterVolumeSpecName: "config-data") pod "9fec9886-9b0e-4458-a67f-afbe529fa2c3" (UID: "9fec9886-9b0e-4458-a67f-afbe529fa2c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.176895 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grhfn\" (UniqueName: \"kubernetes.io/projected/9fec9886-9b0e-4458-a67f-afbe529fa2c3-kube-api-access-grhfn\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.176934 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.176950 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec9886-9b0e-4458-a67f-afbe529fa2c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.257274 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b98d09fb-41ac-4b07-8334-72a33cf11ba6","Type":"ContainerStarted","Data":"e79ecaa153d9fffb5cbc6b0c540abe2b2548b535e713f00446f14e0d2ab9b077"} Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.257386 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b98d09fb-41ac-4b07-8334-72a33cf11ba6","Type":"ContainerStarted","Data":"b9d794716452775e6bdc2f50e66029474739bd4a9bf61fe10fbc5b2ef8794dce"} Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.261331 4751 generic.go:334] "Generic (PLEG): container finished" podID="9fec9886-9b0e-4458-a67f-afbe529fa2c3" containerID="4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563" exitCode=0 Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.261414 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fec9886-9b0e-4458-a67f-afbe529fa2c3","Type":"ContainerDied","Data":"4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563"} Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.261450 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9fec9886-9b0e-4458-a67f-afbe529fa2c3","Type":"ContainerDied","Data":"f8ae5e8e6f653da1866049d257cf6f8a564484075c1c9116734664e4770bb134"} Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.261476 4751 scope.go:117] "RemoveContainer" containerID="4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.261563 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.280721 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.280668401 podStartE2EDuration="2.280668401s" podCreationTimestamp="2025-11-23 04:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:14:35.27569837 +0000 UTC m=+1171.469369739" watchObservedRunningTime="2025-11-23 04:14:35.280668401 +0000 UTC m=+1171.474339770" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.316633 4751 scope.go:117] "RemoveContainer" containerID="4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563" Nov 23 04:14:35 crc kubenswrapper[4751]: E1123 04:14:35.318741 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563\": container with ID starting with 4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563 not found: ID does not exist" containerID="4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.318803 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563"} err="failed to get container status \"4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563\": rpc error: code = NotFound desc = could not find container \"4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563\": container with ID starting with 4548055edd4bd04cad2e1beadbb439c5c09411c5e56c9a5a3b64cbf1c78c1563 not found: ID does not exist" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.330729 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.348762 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.359878 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:14:35 crc kubenswrapper[4751]: E1123 04:14:35.360447 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fec9886-9b0e-4458-a67f-afbe529fa2c3" containerName="nova-scheduler-scheduler" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.360471 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fec9886-9b0e-4458-a67f-afbe529fa2c3" containerName="nova-scheduler-scheduler" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.360757 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fec9886-9b0e-4458-a67f-afbe529fa2c3" containerName="nova-scheduler-scheduler" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.361832 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.364416 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.368913 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.482266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.482552 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rflkl\" (UniqueName: \"kubernetes.io/projected/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-kube-api-access-rflkl\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.482692 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-config-data\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.585057 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.585115 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rflkl\" (UniqueName: \"kubernetes.io/projected/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-kube-api-access-rflkl\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.585217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-config-data\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.590842 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-config-data\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.591088 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.603220 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rflkl\" (UniqueName: 
\"kubernetes.io/projected/1d5c19e2-e749-4c94-b8ce-04b9a34b65ff-kube-api-access-rflkl\") pod \"nova-scheduler-0\" (UID: \"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff\") " pod="openstack/nova-scheduler-0" Nov 23 04:14:35 crc kubenswrapper[4751]: I1123 04:14:35.679959 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.065601 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.191461 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.205129 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-combined-ca-bundle\") pod \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.205204 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-internal-tls-certs\") pod \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.205277 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p25cs\" (UniqueName: \"kubernetes.io/projected/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-kube-api-access-p25cs\") pod \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.205310 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-public-tls-certs\") pod \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.205355 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-logs\") pod \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.205431 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-config-data\") pod \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\" (UID: \"3bd88601-50c8-4c55-b3d3-7c6df124d5e5\") " Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.207026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-logs" (OuterVolumeSpecName: "logs") pod "3bd88601-50c8-4c55-b3d3-7c6df124d5e5" (UID: "3bd88601-50c8-4c55-b3d3-7c6df124d5e5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.208734 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-kube-api-access-p25cs" (OuterVolumeSpecName: "kube-api-access-p25cs") pod "3bd88601-50c8-4c55-b3d3-7c6df124d5e5" (UID: "3bd88601-50c8-4c55-b3d3-7c6df124d5e5"). InnerVolumeSpecName "kube-api-access-p25cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.231075 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-config-data" (OuterVolumeSpecName: "config-data") pod "3bd88601-50c8-4c55-b3d3-7c6df124d5e5" (UID: "3bd88601-50c8-4c55-b3d3-7c6df124d5e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.243642 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bd88601-50c8-4c55-b3d3-7c6df124d5e5" (UID: "3bd88601-50c8-4c55-b3d3-7c6df124d5e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.262499 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3bd88601-50c8-4c55-b3d3-7c6df124d5e5" (UID: "3bd88601-50c8-4c55-b3d3-7c6df124d5e5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.275402 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff","Type":"ContainerStarted","Data":"1265b2eadc5fe8495b876f87d1813db106a29a143d6a623f6aaf8a79bff694d9"} Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.278834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3bd88601-50c8-4c55-b3d3-7c6df124d5e5" (UID: "3bd88601-50c8-4c55-b3d3-7c6df124d5e5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.281817 4751 generic.go:334] "Generic (PLEG): container finished" podID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerID="61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2" exitCode=0 Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.281871 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bd88601-50c8-4c55-b3d3-7c6df124d5e5","Type":"ContainerDied","Data":"61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2"} Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.281922 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3bd88601-50c8-4c55-b3d3-7c6df124d5e5","Type":"ContainerDied","Data":"3d221e3fb465cec67d3f0b356ed24992b31e34cd46be34bcb20cfa83038e27dc"} Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.281924 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.281940 4751 scope.go:117] "RemoveContainer" containerID="61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.307118 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p25cs\" (UniqueName: \"kubernetes.io/projected/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-kube-api-access-p25cs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.307331 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.307357 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-logs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.307368 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.307377 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.307385 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bd88601-50c8-4c55-b3d3-7c6df124d5e5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.326167 4751 scope.go:117] "RemoveContainer" containerID="eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.339971 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.352755 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.362231 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:36 crc kubenswrapper[4751]: E1123 04:14:36.362746 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-log" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.362764 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-log" Nov 23 04:14:36 crc kubenswrapper[4751]: E1123 04:14:36.362790 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-api" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.362796 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-api" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.362865 4751 scope.go:117] "RemoveContainer" containerID="61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.362959 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-api" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.362988 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" containerName="nova-api-log" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.364994 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: E1123 04:14:36.373320 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2\": container with ID starting with 61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2 not found: ID does not exist" containerID="61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.373498 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.373515 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2"} err="failed to get container status \"61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2\": rpc error: code = NotFound desc = could not find container \"61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2\": container with ID starting with 61884f81173c9c6683f3c31dbf2435c46dc8cffd68a856fec3645458f3edc0c2 not found: ID does not exist" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.373569 4751 scope.go:117] "RemoveContainer" containerID="eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.373789 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.374139 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 04:14:36 crc kubenswrapper[4751]: E1123 04:14:36.375905 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e\": container with ID starting with eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e not found: ID does not exist" containerID="eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.375955 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e"} err="failed to get container status \"eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e\": rpc error: code = NotFound desc = could not find container \"eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e\": container with ID starting with eebc1d91635b74ac5dfa72f667bf9cc4fff0047cfebb7b1b3d58d83e2b3d0a7e not found: ID does not exist" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.393919 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.409932 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.410037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4e65a-c2e7-4040-9230-063782c96cca-logs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.410104 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.410147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56d4k\" (UniqueName: \"kubernetes.io/projected/a2b4e65a-c2e7-4040-9230-063782c96cca-kube-api-access-56d4k\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.411243 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.411316 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-config-data\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.513085 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.513157 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56d4k\" (UniqueName: \"kubernetes.io/projected/a2b4e65a-c2e7-4040-9230-063782c96cca-kube-api-access-56d4k\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.513251 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.513281 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-config-data\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.513373 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.513432 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4e65a-c2e7-4040-9230-063782c96cca-logs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.513949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4e65a-c2e7-4040-9230-063782c96cca-logs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.518057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.518166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.518250 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-config-data\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.518631 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2b4e65a-c2e7-4040-9230-063782c96cca-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.535058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56d4k\" (UniqueName: \"kubernetes.io/projected/a2b4e65a-c2e7-4040-9230-063782c96cca-kube-api-access-56d4k\") pod \"nova-api-0\" (UID: \"a2b4e65a-c2e7-4040-9230-063782c96cca\") " pod="openstack/nova-api-0" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.658722 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd88601-50c8-4c55-b3d3-7c6df124d5e5" path="/var/lib/kubelet/pods/3bd88601-50c8-4c55-b3d3-7c6df124d5e5/volumes" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.660110 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fec9886-9b0e-4458-a67f-afbe529fa2c3" path="/var/lib/kubelet/pods/9fec9886-9b0e-4458-a67f-afbe529fa2c3/volumes" Nov 23 04:14:36 crc kubenswrapper[4751]: I1123 04:14:36.692330 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 04:14:37 crc kubenswrapper[4751]: I1123 04:14:37.254933 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 04:14:37 crc kubenswrapper[4751]: W1123 04:14:37.273193 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2b4e65a_c2e7_4040_9230_063782c96cca.slice/crio-5757ecacee5723bbbecc8db1b2a5d5f2fcd2bdee5b5c366c2a1a3a8985cd1901 WatchSource:0}: Error finding container 5757ecacee5723bbbecc8db1b2a5d5f2fcd2bdee5b5c366c2a1a3a8985cd1901: Status 404 returned error can't find the container with id 5757ecacee5723bbbecc8db1b2a5d5f2fcd2bdee5b5c366c2a1a3a8985cd1901 Nov 23 04:14:37 crc kubenswrapper[4751]: I1123 04:14:37.302360 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4e65a-c2e7-4040-9230-063782c96cca","Type":"ContainerStarted","Data":"5757ecacee5723bbbecc8db1b2a5d5f2fcd2bdee5b5c366c2a1a3a8985cd1901"} Nov 23 04:14:37 crc kubenswrapper[4751]: I1123 04:14:37.317588 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1d5c19e2-e749-4c94-b8ce-04b9a34b65ff","Type":"ContainerStarted","Data":"78f393388871eed7e203c09c607d182f7b304221c86a42609efd684c24f197a5"} Nov 23 04:14:37 crc kubenswrapper[4751]: I1123 04:14:37.346105 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.346089163 podStartE2EDuration="2.346089163s" podCreationTimestamp="2025-11-23 04:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:14:37.339688034 +0000 UTC m=+1173.533359383" watchObservedRunningTime="2025-11-23 04:14:37.346089163 +0000 UTC m=+1173.539760522" Nov 23 04:14:38 crc kubenswrapper[4751]: I1123 04:14:38.334293 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4e65a-c2e7-4040-9230-063782c96cca","Type":"ContainerStarted","Data":"924c5b1f7bfd01f32aa9c911a47426d647e398aed9bb205fb2f23095df61631d"} Nov 23 04:14:38 crc kubenswrapper[4751]: I1123 04:14:38.335538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4e65a-c2e7-4040-9230-063782c96cca","Type":"ContainerStarted","Data":"4540043b954e719890e632a7b07ce566b513bad483df0c25e22bf91df0876907"} Nov 23 04:14:38 crc kubenswrapper[4751]: I1123 04:14:38.687321 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 04:14:38 crc kubenswrapper[4751]: I1123 04:14:38.687395 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 04:14:39 crc kubenswrapper[4751]: I1123 04:14:39.390754 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.390721138 podStartE2EDuration="3.390721138s" podCreationTimestamp="2025-11-23 04:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:14:39.377619163 +0000 UTC m=+1175.571290562" watchObservedRunningTime="2025-11-23 04:14:39.390721138 +0000 UTC m=+1175.584392537" Nov 23 04:14:40 crc kubenswrapper[4751]: I1123 04:14:40.680558 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 04:14:43 
Nov 23 04:14:43 crc kubenswrapper[4751]: I1123 04:14:43.687263 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 04:14:43 crc kubenswrapper[4751]: I1123 04:14:43.687855 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 04:14:44 crc kubenswrapper[4751]: I1123 04:14:44.703615 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b98d09fb-41ac-4b07-8334-72a33cf11ba6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 04:14:44 crc kubenswrapper[4751]: I1123 04:14:44.703692 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b98d09fb-41ac-4b07-8334-72a33cf11ba6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 04:14:45 crc kubenswrapper[4751]: I1123 04:14:45.680335 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 23 04:14:45 crc kubenswrapper[4751]: I1123 04:14:45.727909 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 23 04:14:46 crc kubenswrapper[4751]: I1123 04:14:46.487271 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 23 04:14:46 crc kubenswrapper[4751]: I1123 04:14:46.692984 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 04:14:46 crc kubenswrapper[4751]: I1123 04:14:46.693436 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 04:14:47 crc kubenswrapper[4751]: I1123 04:14:47.705542 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2b4e65a-c2e7-4040-9230-063782c96cca" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 04:14:47 crc kubenswrapper[4751]: I1123 04:14:47.705544 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2b4e65a-c2e7-4040-9230-063782c96cca" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 04:14:51 crc kubenswrapper[4751]: I1123 04:14:51.475883 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 23 04:14:53 crc kubenswrapper[4751]: I1123 04:14:53.693280 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 23 04:14:53 crc kubenswrapper[4751]: I1123 04:14:53.694051 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 23 04:14:53 crc kubenswrapper[4751]: I1123 04:14:53.700316 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 23 04:14:54 crc kubenswrapper[4751]: I1123 04:14:54.588530 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 23 04:14:56 crc kubenswrapper[4751]: I1123 04:14:56.704023 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
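Note: the probe="startup" status="unhealthy" lines above do not restart anything by themselves; the startup probe simply has not yet succeeded within its failure budget, and readiness stays gated behind it (status="" until "started" flips, then "ready" follows). Each pod here recovers within roughly 10 to 20 seconds of cold start. A sketch, with assumed numbers, of a startup probe that would behave this way, using the real k8s.io/api types:

```go
// Assumed illustration (values not read from the cluster): an HTTPS startup
// probe against :8775 with a generous failure budget, so a slow cold start
// logs "unhealthy" for a while and then flips to "started" without a restart.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	p := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/",
				Port:   intstr.FromInt(8775),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    5,
		FailureThreshold: 30, // container is only restarted after the whole budget is spent
	}
	fmt.Printf("startup budget: %ds\n", p.PeriodSeconds*p.FailureThreshold)
}
```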
Nov 23 04:14:56 crc kubenswrapper[4751]: I1123 04:14:56.704023 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 04:14:56 crc kubenswrapper[4751]: I1123 04:14:56.704604 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 04:14:56 crc kubenswrapper[4751]: I1123 04:14:56.705425 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 04:14:56 crc kubenswrapper[4751]: I1123 04:14:56.705495 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 04:14:56 crc kubenswrapper[4751]: I1123 04:14:56.717075 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 04:14:56 crc kubenswrapper[4751]: I1123 04:14:56.719079 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.149100 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"]
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.150780 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.153240 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.154995 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.159668 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"]
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.245178 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-secret-volume\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.245252 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-config-volume\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.245887 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbp8\" (UniqueName: \"kubernetes.io/projected/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-kube-api-access-dkbp8\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.347274 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-secret-volume\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.347803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-config-volume\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.348131 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbp8\" (UniqueName: \"kubernetes.io/projected/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-kube-api-access-dkbp8\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.348831 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-config-volume\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.359283 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-secret-volume\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.371058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbp8\" (UniqueName: \"kubernetes.io/projected/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-kube-api-access-dkbp8\") pod \"collect-profiles-29397855-mg6jt\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.475562 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:00 crc kubenswrapper[4751]: I1123 04:15:00.954919 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"]
Nov 23 04:15:00 crc kubenswrapper[4751]: W1123 04:15:00.960712 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c2aada_719a_44a4_b3d4_1db9b3ba2f5e.slice/crio-6ca2d6218427a5fb2fccf0f6bcc6f64f8d6ff04e9506db04fd20d107e544fa8e WatchSource:0}: Error finding container 6ca2d6218427a5fb2fccf0f6bcc6f64f8d6ff04e9506db04fd20d107e544fa8e: Status 404 returned error can't find the container with id 6ca2d6218427a5fb2fccf0f6bcc6f64f8d6ff04e9506db04fd20d107e544fa8e
Nov 23 04:15:01 crc kubenswrapper[4751]: I1123 04:15:01.674677 4751 generic.go:334] "Generic (PLEG): container finished" podID="14c2aada-719a-44a4-b3d4-1db9b3ba2f5e" containerID="9bd3c43aa80b22bc934374be7d27fce1a151512459c50aec3fd60ebee1096df9" exitCode=0
Nov 23 04:15:01 crc kubenswrapper[4751]: I1123 04:15:01.674743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt" event={"ID":"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e","Type":"ContainerDied","Data":"9bd3c43aa80b22bc934374be7d27fce1a151512459c50aec3fd60ebee1096df9"}
Nov 23 04:15:01 crc kubenswrapper[4751]: I1123 04:15:01.675112 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt" event={"ID":"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e","Type":"ContainerStarted","Data":"6ca2d6218427a5fb2fccf0f6bcc6f64f8d6ff04e9506db04fd20d107e544fa8e"}
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.109638 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.211192 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkbp8\" (UniqueName: \"kubernetes.io/projected/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-kube-api-access-dkbp8\") pod \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") "
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.211407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-config-volume\") pod \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") "
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.211556 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-secret-volume\") pod \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\" (UID: \"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e\") "
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.212773 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-config-volume" (OuterVolumeSpecName: "config-volume") pod "14c2aada-719a-44a4-b3d4-1db9b3ba2f5e" (UID: "14c2aada-719a-44a4-b3d4-1db9b3ba2f5e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.217988 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-kube-api-access-dkbp8" (OuterVolumeSpecName: "kube-api-access-dkbp8") pod "14c2aada-719a-44a4-b3d4-1db9b3ba2f5e" (UID: "14c2aada-719a-44a4-b3d4-1db9b3ba2f5e"). InnerVolumeSpecName "kube-api-access-dkbp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.218965 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "14c2aada-719a-44a4-b3d4-1db9b3ba2f5e" (UID: "14c2aada-719a-44a4-b3d4-1db9b3ba2f5e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.314298 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.314646 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkbp8\" (UniqueName: \"kubernetes.io/projected/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-kube-api-access-dkbp8\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.314659 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.703553 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt" event={"ID":"14c2aada-719a-44a4-b3d4-1db9b3ba2f5e","Type":"ContainerDied","Data":"6ca2d6218427a5fb2fccf0f6bcc6f64f8d6ff04e9506db04fd20d107e544fa8e"}
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.703614 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca2d6218427a5fb2fccf0f6bcc6f64f8d6ff04e9506db04fd20d107e544fa8e"
Nov 23 04:15:03 crc kubenswrapper[4751]: I1123 04:15:03.703691 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"
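The collect-profiles-29397855-mg6jt entries trace a complete short-lived pod lifecycle in about three seconds: SyncLoop ADD, volume verification and mount, ContainerStarted, ContainerDied with exitCode=0, then volume teardown and detach. A hedged client-go sketch that observes the same transitions from the API side (the namespace is taken from the log; the kubeconfig path is an assumption):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch pod updates the way the kubelet's SyncLoop receives them from
	// the apiserver, printing container state changes as they arrive.
	w, err := cs.CoreV1().Pods("openshift-operator-lifecycle-manager").
		Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, st := range pod.Status.ContainerStatuses {
			fmt.Printf("%s %s/%s terminated=%v\n",
				ev.Type, pod.Name, st.Name, st.State.Terminated != nil)
		}
	}
}
```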
Nov 23 04:15:04 crc kubenswrapper[4751]: I1123 04:15:04.939871 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 04:15:05 crc kubenswrapper[4751]: I1123 04:15:05.778439 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 04:15:09 crc kubenswrapper[4751]: I1123 04:15:09.035936 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerName="rabbitmq" containerID="cri-o://e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace" gracePeriod=604796
Nov 23 04:15:09 crc kubenswrapper[4751]: I1123 04:15:09.873687 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3885484b-1988-4a56-9b08-7848d614be82" containerName="rabbitmq" containerID="cri-o://5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35" gracePeriod=604796
Nov 23 04:15:10 crc kubenswrapper[4751]: I1123 04:15:10.332182 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused"
Nov 23 04:15:10 crc kubenswrapper[4751]: I1123 04:15:10.598161 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3885484b-1988-4a56-9b08-7848d614be82" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.635815 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
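Two details in the RabbitMQ shutdown are worth flagging. gracePeriod=604796 is the grace window remaining when the kill was issued: the DELETE landed at 04:15:04 and the kill at 04:15:09, which is arithmetically consistent with a terminationGracePeriodSeconds around 604800 (7 days, a long drain window that is plausible for a messaging broker, though the actual spec value is not in this log) minus the seconds already elapsed. And once the AMQP listener on 5671 closes, the readiness probe degrades to a plain TCP connection refusal. A minimal sketch of that TCP-socket-style check (addresses from the log; the 1-second timeout is an assumption):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// tcpProbe mirrors a TCPSocketAction readiness check: success is simply
// being able to open the connection before the deadline.
func tcpProbe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 1*time.Second) // assumed timeoutSeconds
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.100:5671: connect: connection refused"
	}
	return conn.Close()
}

func main() {
	if err := tcpProbe("10.217.0.100:5671"); err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	fmt.Println("ready")
}
```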
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.750988 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-plugins\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751068 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-confd\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751089 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-erlang-cookie\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jht2m\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-kube-api-access-jht2m\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751167 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85de7e79-bbdf-4a3c-83d1-5a3977844a72-pod-info\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751225 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-server-conf\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751274 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85de7e79-bbdf-4a3c-83d1-5a3977844a72-erlang-cookie-secret\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751308 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-plugins-conf\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751364 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-tls\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751404 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.751463 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-config-data\") pod \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\" (UID: \"85de7e79-bbdf-4a3c-83d1-5a3977844a72\") "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.752264 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.752500 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.753507 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.754170 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.754192 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.754206 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.758802 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-kube-api-access-jht2m" (OuterVolumeSpecName: "kube-api-access-jht2m") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "kube-api-access-jht2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.760064 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85de7e79-bbdf-4a3c-83d1-5a3977844a72-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.761906 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.764340 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.769972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/85de7e79-bbdf-4a3c-83d1-5a3977844a72-pod-info" (OuterVolumeSpecName: "pod-info") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.806721 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-config-data" (OuterVolumeSpecName: "config-data") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.838334 4751 generic.go:334] "Generic (PLEG): container finished" podID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerID="e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace" exitCode=0
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.838401 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"85de7e79-bbdf-4a3c-83d1-5a3977844a72","Type":"ContainerDied","Data":"e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace"}
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.838446 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"85de7e79-bbdf-4a3c-83d1-5a3977844a72","Type":"ContainerDied","Data":"a0e210fd40de7d49f53ae43c23edc7946a3ba7dbe7b6ab340b0107754e656e65"}
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.838465 4751 scope.go:117] "RemoveContainer" containerID="e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.838611 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.852938 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-server-conf" (OuterVolumeSpecName: "server-conf") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.855808 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.855845 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jht2m\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-kube-api-access-jht2m\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.855859 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/85de7e79-bbdf-4a3c-83d1-5a3977844a72-pod-info\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.855872 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/85de7e79-bbdf-4a3c-83d1-5a3977844a72-server-conf\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.855884 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/85de7e79-bbdf-4a3c-83d1-5a3977844a72-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.855896 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.855930 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.891337 4751 scope.go:117] "RemoveContainer" containerID="c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.897453 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.910103 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "85de7e79-bbdf-4a3c-83d1-5a3977844a72" (UID: "85de7e79-bbdf-4a3c-83d1-5a3977844a72"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.912172 4751 scope.go:117] "RemoveContainer" containerID="e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace"
Nov 23 04:15:15 crc kubenswrapper[4751]: E1123 04:15:15.912625 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace\": container with ID starting with e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace not found: ID does not exist" containerID="e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.912673 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace"} err="failed to get container status \"e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace\": rpc error: code = NotFound desc = could not find container \"e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace\": container with ID starting with e539621a2c460179a5f258d6e5cc4fb0c19877e678e280e8f77eedb36d810ace not found: ID does not exist"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.912703 4751 scope.go:117] "RemoveContainer" containerID="c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9"
Nov 23 04:15:15 crc kubenswrapper[4751]: E1123 04:15:15.913092 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9\": container with ID starting with c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9 not found: ID does not exist" containerID="c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.913122 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9"} err="failed to get container status \"c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9\": rpc error: code = NotFound desc = could not find container \"c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9\": container with ID starting with c3c91058f5b80f2502a349795937924d1eef48bff8964f33e96a5c1ec4b87fe9 not found: ID does not exist"
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.957809 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:15 crc kubenswrapper[4751]: I1123 04:15:15.957838 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/85de7e79-bbdf-4a3c-83d1-5a3977844a72-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.179745 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.218819 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
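The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are harmless: the kubelet asks CRI-O to delete containers that are already gone, receives gRPC NotFound, logs it, and proceeds, so cleanup stays idempotent under retries. A sketch of that error-handling pattern (deleteContainer is a hypothetical stand-in for the runtime RPC, not the kubelet's own code):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteContainer stands in for a CRI RemoveContainer call; here it just
// fabricates the NotFound error shape seen in the log above.
func deleteContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := deleteContainer("e539621a2c46")
	if status.Code(err) == codes.NotFound {
		// Already gone: treat as success so retries don't loop forever.
		fmt.Println("container already removed, ignoring:", err)
		return
	}
	if err != nil {
		panic(err)
	}
}
```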
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.255509 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 04:15:16 crc kubenswrapper[4751]: E1123 04:15:16.255989 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerName="rabbitmq"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.256940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerName="rabbitmq"
Nov 23 04:15:16 crc kubenswrapper[4751]: E1123 04:15:16.256967 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerName="setup-container"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.256975 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerName="setup-container"
Nov 23 04:15:16 crc kubenswrapper[4751]: E1123 04:15:16.257012 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c2aada-719a-44a4-b3d4-1db9b3ba2f5e" containerName="collect-profiles"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.257020 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c2aada-719a-44a4-b3d4-1db9b3ba2f5e" containerName="collect-profiles"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.257674 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" containerName="rabbitmq"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.257712 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c2aada-719a-44a4-b3d4-1db9b3ba2f5e" containerName="collect-profiles"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.258782 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.262713 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.262737 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6hxm9"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.263196 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.263259 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.263526 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.263828 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.263872 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.267218 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376549 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376600 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcb1b3bf-2ace-42f9-845f-8b993051016b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376632 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2w4m\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-kube-api-access-f2w4m\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376666 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcb1b3bf-2ace-42f9-845f-8b993051016b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376693 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376731 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376807 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376831 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376893 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376921 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.376966 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.477947 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.477994 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.478011 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.478070 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.478091 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcb1b3bf-2ace-42f9-845f-8b993051016b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.478752 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479080 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2w4m\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-kube-api-access-f2w4m\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479106 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcb1b3bf-2ace-42f9-845f-8b993051016b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479126 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479148 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479157 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479410 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479443 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479798 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.479971 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.482471 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bcb1b3bf-2ace-42f9-845f-8b993051016b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.482838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bcb1b3bf-2ace-42f9-845f-8b993051016b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.483860 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.485258 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bcb1b3bf-2ace-42f9-845f-8b993051016b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.485529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.506717 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2w4m\" (UniqueName: \"kubernetes.io/projected/bcb1b3bf-2ace-42f9-845f-8b993051016b-kube-api-access-f2w4m\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.521310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"bcb1b3bf-2ace-42f9-845f-8b993051016b\") " pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.576369 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.586746 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.664400 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85de7e79-bbdf-4a3c-83d1-5a3977844a72" path="/var/lib/kubelet/pods/85de7e79-bbdf-4a3c-83d1-5a3977844a72/volumes"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686587 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-confd\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686623 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-plugins\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686646 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3885484b-1988-4a56-9b08-7848d614be82-pod-info\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7bc6\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-kube-api-access-m7bc6\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686705 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3885484b-1988-4a56-9b08-7848d614be82-erlang-cookie-secret\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686884 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-tls\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686944 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-server-conf\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-erlang-cookie\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.686995 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-plugins-conf\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.687013 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-config-data\") pod \"3885484b-1988-4a56-9b08-7848d614be82\" (UID: \"3885484b-1988-4a56-9b08-7848d614be82\") "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.688960 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.698061 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
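Each volume in these teardown sequences walks the same ladder: "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached", with UnmountDevice as an extra step for the local persistent volume. A toy state machine capturing that ordering (the state names mirror the log; the rest is illustrative, not kubelet code):

```go
package main

import "fmt"

// Volume teardown states in the order they appear in the log.
type state int

const (
	mounted state = iota
	unmountStarted
	tearDownSucceeded
	detached
)

func (s state) String() string {
	return [...]string{
		"Mounted", "UnmountVolume started", "TearDown succeeded", "Volume detached",
	}[s]
}

func main() {
	// Walk one volume through the ladder; a real reconciler drives many
	// volumes concurrently and only reports "detached" after TearDown.
	s := mounted
	for s != detached {
		s++
		fmt.Printf("volume %q -> %s\n", "rabbitmq-confd", s)
	}
}
```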
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.698183 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.699160 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3885484b-1988-4a56-9b08-7848d614be82-pod-info" (OuterVolumeSpecName: "pod-info") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.699720 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.700797 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-kube-api-access-m7bc6" (OuterVolumeSpecName: "kube-api-access-m7bc6") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "kube-api-access-m7bc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.713180 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.713199 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3885484b-1988-4a56-9b08-7848d614be82-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.732998 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-config-data" (OuterVolumeSpecName: "config-data") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.754917 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-server-conf" (OuterVolumeSpecName: "server-conf") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789079 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789112 4751 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-server-conf\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789124 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789138 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789151 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3885484b-1988-4a56-9b08-7848d614be82-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789164 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789175 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3885484b-1988-4a56-9b08-7848d614be82-pod-info\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789185 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7bc6\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-kube-api-access-m7bc6\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789198 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3885484b-1988-4a56-9b08-7848d614be82-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.789219 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.805663 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3885484b-1988-4a56-9b08-7848d614be82" (UID: "3885484b-1988-4a56-9b08-7848d614be82"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.808398 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.851438 4751 generic.go:334] "Generic (PLEG): container finished" podID="3885484b-1988-4a56-9b08-7848d614be82" containerID="5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35" exitCode=0
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.851483 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3885484b-1988-4a56-9b08-7848d614be82","Type":"ContainerDied","Data":"5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35"}
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.851511 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3885484b-1988-4a56-9b08-7848d614be82","Type":"ContainerDied","Data":"a5ac60ed33bd695b8fbf0a196eebffa8e5e6e05098b8a1d7bdfe34181248d8dd"}
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.851533 4751 scope.go:117] "RemoveContainer" containerID="5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.851659 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.878435 4751 scope.go:117] "RemoveContainer" containerID="7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.890441 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3885484b-1988-4a56-9b08-7848d614be82-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.890472 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.911237 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.927834 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.928001 4751 scope.go:117] "RemoveContainer" containerID="5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35"
Nov 23 04:15:16 crc kubenswrapper[4751]: E1123 04:15:16.935556 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35\": container with ID starting with 5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35 not found: ID does not exist" containerID="5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35"
Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.935603 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35"} err="failed to get container status \"5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35\": rpc error: code = NotFound desc = could not
find container \"5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35\": container with ID starting with 5cc8f9acd2df94117747242b8be1fa6da8b6c90d70b09195c5668b43ba6eae35 not found: ID does not exist" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.935626 4751 scope.go:117] "RemoveContainer" containerID="7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9" Nov 23 04:15:16 crc kubenswrapper[4751]: E1123 04:15:16.936149 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9\": container with ID starting with 7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9 not found: ID does not exist" containerID="7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.936183 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9"} err="failed to get container status \"7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9\": rpc error: code = NotFound desc = could not find container \"7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9\": container with ID starting with 7ed7655b9eebd038330e71a46f7c66dacc1be89ca616ef68f76afc03eaa59bb9 not found: ID does not exist" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.941444 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 04:15:16 crc kubenswrapper[4751]: E1123 04:15:16.941854 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3885484b-1988-4a56-9b08-7848d614be82" containerName="rabbitmq" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.941871 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3885484b-1988-4a56-9b08-7848d614be82" containerName="rabbitmq" Nov 23 04:15:16 crc kubenswrapper[4751]: E1123 04:15:16.941898 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3885484b-1988-4a56-9b08-7848d614be82" containerName="setup-container" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.941905 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3885484b-1988-4a56-9b08-7848d614be82" containerName="setup-container" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.942114 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3885484b-1988-4a56-9b08-7848d614be82" containerName="rabbitmq" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.943037 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.944894 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.945064 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.945246 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.945375 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pbjgn" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.945531 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.945687 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.945834 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.949720 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.991948 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.991992 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992013 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992031 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7gnl\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-kube-api-access-j7gnl\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992063 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992078 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992434 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ebb2468-5894-4d38-ac88-10033af58026-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:16 crc kubenswrapper[4751]: I1123 04:15:16.992454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ebb2468-5894-4d38-ac88-10033af58026-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.055281 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.094199 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.094521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.094567 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ebb2468-5894-4d38-ac88-10033af58026-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 
04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.094582 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ebb2468-5894-4d38-ac88-10033af58026-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.094646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.094671 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.094793 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7gnl\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-kube-api-access-j7gnl\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.094977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.095028 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.095145 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.095185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.095206 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.095254 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.095742 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.096013 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ebb2468-5894-4d38-ac88-10033af58026-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.097423 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.097727 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ebb2468-5894-4d38-ac88-10033af58026-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.098875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.100761 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.108015 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ebb2468-5894-4d38-ac88-10033af58026-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.111855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ebb2468-5894-4d38-ac88-10033af58026-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.114213 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7gnl\" (UniqueName: \"kubernetes.io/projected/2ebb2468-5894-4d38-ac88-10033af58026-kube-api-access-j7gnl\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.133264 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2ebb2468-5894-4d38-ac88-10033af58026\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.267696 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.751736 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.884821 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ebb2468-5894-4d38-ac88-10033af58026","Type":"ContainerStarted","Data":"d3494fefffad753a1051f63e2fe2b5f124e403abd843b52d5e81899a6c94c804"} Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.886158 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcb1b3bf-2ace-42f9-845f-8b993051016b","Type":"ContainerStarted","Data":"eab1a689ef6c0d9f17580ca46fa5c499abb667576a7b8d3c45b94dc80c1c5f5f"} Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.956777 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r4v9g"] Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.964795 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.968850 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 23 04:15:17 crc kubenswrapper[4751]: I1123 04:15:17.973665 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r4v9g"] Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.016734 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-config\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.017037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.017071 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-svc\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.017093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8nd\" (UniqueName: \"kubernetes.io/projected/8397006a-36ec-4b4f-a736-d43b0b45188a-kube-api-access-5k8nd\") pod 
\"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.017111 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.017149 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.017186 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.118562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-config\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.118626 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.118661 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-svc\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.118694 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k8nd\" (UniqueName: \"kubernetes.io/projected/8397006a-36ec-4b4f-a736-d43b0b45188a-kube-api-access-5k8nd\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.118728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.118785 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.118832 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.119906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.119940 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.119941 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-config\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.120524 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-svc\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.120974 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.121378 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.146922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k8nd\" (UniqueName: \"kubernetes.io/projected/8397006a-36ec-4b4f-a736-d43b0b45188a-kube-api-access-5k8nd\") pod \"dnsmasq-dns-d558885bc-r4v9g\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.288155 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.655101 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3885484b-1988-4a56-9b08-7848d614be82" path="/var/lib/kubelet/pods/3885484b-1988-4a56-9b08-7848d614be82/volumes" Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.853601 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r4v9g"] Nov 23 04:15:18 crc kubenswrapper[4751]: I1123 04:15:18.898750 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcb1b3bf-2ace-42f9-845f-8b993051016b","Type":"ContainerStarted","Data":"712b2e8d1e20fe1e88160f85447a63bf1eea6c299558bbaba14968f4b2db2297"} Nov 23 04:15:19 crc kubenswrapper[4751]: I1123 04:15:19.908751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ebb2468-5894-4d38-ac88-10033af58026","Type":"ContainerStarted","Data":"a676afd11026616a68cbc62df4c55b45695c1b9dd71206d5d1c3047a1e899bb8"} Nov 23 04:15:19 crc kubenswrapper[4751]: I1123 04:15:19.911790 4751 generic.go:334] "Generic (PLEG): container finished" podID="8397006a-36ec-4b4f-a736-d43b0b45188a" containerID="cd5076341dda2f82c088683885605fb2b2e8e204407f2ecd40b6815efe30033f" exitCode=0 Nov 23 04:15:19 crc kubenswrapper[4751]: I1123 04:15:19.912761 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" event={"ID":"8397006a-36ec-4b4f-a736-d43b0b45188a","Type":"ContainerDied","Data":"cd5076341dda2f82c088683885605fb2b2e8e204407f2ecd40b6815efe30033f"} Nov 23 04:15:19 crc kubenswrapper[4751]: I1123 04:15:19.912797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" event={"ID":"8397006a-36ec-4b4f-a736-d43b0b45188a","Type":"ContainerStarted","Data":"321d88bfe95fa604f328c74eaf8766a1325ea97c19c2cd8cebbbd7f60e7eba95"} Nov 23 04:15:20 crc kubenswrapper[4751]: I1123 04:15:20.925549 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" event={"ID":"8397006a-36ec-4b4f-a736-d43b0b45188a","Type":"ContainerStarted","Data":"aa9ae053398b18094484d8b1e956ccb8c32ca838e586ab372e145aa3cbd7bb25"} Nov 23 04:15:20 crc kubenswrapper[4751]: I1123 04:15:20.925896 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:20 crc kubenswrapper[4751]: I1123 04:15:20.955330 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" podStartSLOduration=3.9553067029999998 podStartE2EDuration="3.955306703s" podCreationTimestamp="2025-11-23 04:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:15:20.948261567 +0000 UTC m=+1217.141932966" watchObservedRunningTime="2025-11-23 04:15:20.955306703 +0000 UTC m=+1217.148978062" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.289590 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.374213 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-bs4bv"] Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.374799 4751 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" podUID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" containerName="dnsmasq-dns" containerID="cri-o://7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d" gracePeriod=10 Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.555257 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-gt9x6"] Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.557064 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.570715 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-gt9x6"] Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.637873 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-config\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.637963 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r2zg\" (UniqueName: \"kubernetes.io/projected/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-kube-api-access-9r2zg\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.637997 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.638126 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.638204 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.638378 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.638402 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " 
pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.739758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.740213 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.740337 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.740434 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.740499 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-config\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.740548 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r2zg\" (UniqueName: \"kubernetes.io/projected/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-kube-api-access-9r2zg\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.740585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.740649 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.741199 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" 
Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.742610 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.742681 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.742986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-config\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.743580 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.758975 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r2zg\" (UniqueName: \"kubernetes.io/projected/d97d28a3-afb1-41a4-b623-ed9e4b89ca31-kube-api-access-9r2zg\") pod \"dnsmasq-dns-78c64bc9c5-gt9x6\" (UID: \"d97d28a3-afb1-41a4-b623-ed9e4b89ca31\") " pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.880039 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.903214 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.944846 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-config\") pod \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.944933 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2zlw\" (UniqueName: \"kubernetes.io/projected/ac5269f0-23c7-4465-a21d-9f5946acbbfc-kube-api-access-p2zlw\") pod \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.945047 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-swift-storage-0\") pod \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.945114 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-svc\") pod \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.945147 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-sb\") pod \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.945171 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-nb\") pod \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\" (UID: \"ac5269f0-23c7-4465-a21d-9f5946acbbfc\") " Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.951917 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5269f0-23c7-4465-a21d-9f5946acbbfc-kube-api-access-p2zlw" (OuterVolumeSpecName: "kube-api-access-p2zlw") pod "ac5269f0-23c7-4465-a21d-9f5946acbbfc" (UID: "ac5269f0-23c7-4465-a21d-9f5946acbbfc"). InnerVolumeSpecName "kube-api-access-p2zlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:15:28 crc kubenswrapper[4751]: I1123 04:15:28.999755 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac5269f0-23c7-4465-a21d-9f5946acbbfc" (UID: "ac5269f0-23c7-4465-a21d-9f5946acbbfc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.002177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-config" (OuterVolumeSpecName: "config") pod "ac5269f0-23c7-4465-a21d-9f5946acbbfc" (UID: "ac5269f0-23c7-4465-a21d-9f5946acbbfc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.007819 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac5269f0-23c7-4465-a21d-9f5946acbbfc" (UID: "ac5269f0-23c7-4465-a21d-9f5946acbbfc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.008957 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac5269f0-23c7-4465-a21d-9f5946acbbfc" (UID: "ac5269f0-23c7-4465-a21d-9f5946acbbfc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.019210 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac5269f0-23c7-4465-a21d-9f5946acbbfc" (UID: "ac5269f0-23c7-4465-a21d-9f5946acbbfc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.020913 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" containerID="7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d" exitCode=0 Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.020968 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" event={"ID":"ac5269f0-23c7-4465-a21d-9f5946acbbfc","Type":"ContainerDied","Data":"7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d"} Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.021000 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" event={"ID":"ac5269f0-23c7-4465-a21d-9f5946acbbfc","Type":"ContainerDied","Data":"d58f9fc6adaf8b0b04778c07972244b006cacd657848585a755e916c9c55a4c9"} Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.021025 4751 scope.go:117] "RemoveContainer" containerID="7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.021196 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-bs4bv" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.047585 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.047608 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2zlw\" (UniqueName: \"kubernetes.io/projected/ac5269f0-23c7-4465-a21d-9f5946acbbfc-kube-api-access-p2zlw\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.047617 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.047625 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.047634 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.047643 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac5269f0-23c7-4465-a21d-9f5946acbbfc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.048797 4751 scope.go:117] "RemoveContainer" containerID="1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.079584 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-bs4bv"] Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.083789 4751 scope.go:117] "RemoveContainer" containerID="7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d" Nov 23 04:15:29 crc kubenswrapper[4751]: E1123 04:15:29.084490 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d\": container with ID starting with 7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d not found: ID does not exist" containerID="7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.084520 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d"} err="failed to get container status \"7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d\": rpc error: code = NotFound desc = could not find container \"7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d\": container with ID starting with 7583b067fead9723d4921fe9eb7c3ebd232108d9368b1f4fe64a2656f88e484d not found: ID does not exist" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.084542 4751 scope.go:117] "RemoveContainer" containerID="1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e" Nov 23 04:15:29 crc kubenswrapper[4751]: E1123 04:15:29.084957 4751 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e\": container with ID starting with 1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e not found: ID does not exist" containerID="1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.084983 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e"} err="failed to get container status \"1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e\": rpc error: code = NotFound desc = could not find container \"1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e\": container with ID starting with 1c35dadba06f2d48274c1d643d351f9aa01b839e0e3a357e2937c1ac13b69f3e not found: ID does not exist" Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.089886 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-bs4bv"] Nov 23 04:15:29 crc kubenswrapper[4751]: W1123 04:15:29.401134 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd97d28a3_afb1_41a4_b623_ed9e4b89ca31.slice/crio-be94fc6381ff82e539837882715c30aa771f2f8036716310c5fba04dabeb53be WatchSource:0}: Error finding container be94fc6381ff82e539837882715c30aa771f2f8036716310c5fba04dabeb53be: Status 404 returned error can't find the container with id be94fc6381ff82e539837882715c30aa771f2f8036716310c5fba04dabeb53be Nov 23 04:15:29 crc kubenswrapper[4751]: I1123 04:15:29.403567 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-gt9x6"] Nov 23 04:15:30 crc kubenswrapper[4751]: I1123 04:15:30.032412 4751 generic.go:334] "Generic (PLEG): container finished" podID="d97d28a3-afb1-41a4-b623-ed9e4b89ca31" containerID="a2cb925d002cb772b72533d89eabb67bb42e3783a67790e3b0f11be4eda2f958" exitCode=0 Nov 23 04:15:30 crc kubenswrapper[4751]: I1123 04:15:30.032495 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" event={"ID":"d97d28a3-afb1-41a4-b623-ed9e4b89ca31","Type":"ContainerDied","Data":"a2cb925d002cb772b72533d89eabb67bb42e3783a67790e3b0f11be4eda2f958"} Nov 23 04:15:30 crc kubenswrapper[4751]: I1123 04:15:30.032810 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" event={"ID":"d97d28a3-afb1-41a4-b623-ed9e4b89ca31","Type":"ContainerStarted","Data":"be94fc6381ff82e539837882715c30aa771f2f8036716310c5fba04dabeb53be"} Nov 23 04:15:30 crc kubenswrapper[4751]: I1123 04:15:30.663072 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" path="/var/lib/kubelet/pods/ac5269f0-23c7-4465-a21d-9f5946acbbfc/volumes" Nov 23 04:15:31 crc kubenswrapper[4751]: I1123 04:15:31.047822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" event={"ID":"d97d28a3-afb1-41a4-b623-ed9e4b89ca31","Type":"ContainerStarted","Data":"8ef00558ee88ca6f89aefd7871d93c82980b3c06fffb527c2bf3673898a945c2"} Nov 23 04:15:31 crc kubenswrapper[4751]: I1123 04:15:31.048062 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:31 crc kubenswrapper[4751]: I1123 04:15:31.083554 4751 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" podStartSLOduration=3.08353382 podStartE2EDuration="3.08353382s" podCreationTimestamp="2025-11-23 04:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:15:31.081333452 +0000 UTC m=+1227.275004831" watchObservedRunningTime="2025-11-23 04:15:31.08353382 +0000 UTC m=+1227.277205179" Nov 23 04:15:38 crc kubenswrapper[4751]: I1123 04:15:38.882774 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-gt9x6" Nov 23 04:15:38 crc kubenswrapper[4751]: I1123 04:15:38.997702 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r4v9g"] Nov 23 04:15:38 crc kubenswrapper[4751]: I1123 04:15:38.998089 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" podUID="8397006a-36ec-4b4f-a736-d43b0b45188a" containerName="dnsmasq-dns" containerID="cri-o://aa9ae053398b18094484d8b1e956ccb8c32ca838e586ab372e145aa3cbd7bb25" gracePeriod=10 Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.162388 4751 generic.go:334] "Generic (PLEG): container finished" podID="8397006a-36ec-4b4f-a736-d43b0b45188a" containerID="aa9ae053398b18094484d8b1e956ccb8c32ca838e586ab372e145aa3cbd7bb25" exitCode=0 Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.162981 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" event={"ID":"8397006a-36ec-4b4f-a736-d43b0b45188a","Type":"ContainerDied","Data":"aa9ae053398b18094484d8b1e956ccb8c32ca838e586ab372e145aa3cbd7bb25"} Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.447077 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.473525 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-svc\") pod \"8397006a-36ec-4b4f-a736-d43b0b45188a\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.473619 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-swift-storage-0\") pod \"8397006a-36ec-4b4f-a736-d43b0b45188a\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.473643 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-config\") pod \"8397006a-36ec-4b4f-a736-d43b0b45188a\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.473693 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-sb\") pod \"8397006a-36ec-4b4f-a736-d43b0b45188a\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.473737 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-openstack-edpm-ipam\") pod \"8397006a-36ec-4b4f-a736-d43b0b45188a\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.473807 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k8nd\" (UniqueName: \"kubernetes.io/projected/8397006a-36ec-4b4f-a736-d43b0b45188a-kube-api-access-5k8nd\") pod \"8397006a-36ec-4b4f-a736-d43b0b45188a\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.473843 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-nb\") pod \"8397006a-36ec-4b4f-a736-d43b0b45188a\" (UID: \"8397006a-36ec-4b4f-a736-d43b0b45188a\") " Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.480754 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8397006a-36ec-4b4f-a736-d43b0b45188a-kube-api-access-5k8nd" (OuterVolumeSpecName: "kube-api-access-5k8nd") pod "8397006a-36ec-4b4f-a736-d43b0b45188a" (UID: "8397006a-36ec-4b4f-a736-d43b0b45188a"). InnerVolumeSpecName "kube-api-access-5k8nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.542621 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8397006a-36ec-4b4f-a736-d43b0b45188a" (UID: "8397006a-36ec-4b4f-a736-d43b0b45188a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.549848 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8397006a-36ec-4b4f-a736-d43b0b45188a" (UID: "8397006a-36ec-4b4f-a736-d43b0b45188a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.568830 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "8397006a-36ec-4b4f-a736-d43b0b45188a" (UID: "8397006a-36ec-4b4f-a736-d43b0b45188a"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.576054 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8397006a-36ec-4b4f-a736-d43b0b45188a" (UID: "8397006a-36ec-4b4f-a736-d43b0b45188a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.576423 4751 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.576464 4751 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.576476 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.576484 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.576493 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k8nd\" (UniqueName: \"kubernetes.io/projected/8397006a-36ec-4b4f-a736-d43b0b45188a-kube-api-access-5k8nd\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.582759 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-config" (OuterVolumeSpecName: "config") pod "8397006a-36ec-4b4f-a736-d43b0b45188a" (UID: "8397006a-36ec-4b4f-a736-d43b0b45188a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.585886 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8397006a-36ec-4b4f-a736-d43b0b45188a" (UID: "8397006a-36ec-4b4f-a736-d43b0b45188a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.678400 4751 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:39 crc kubenswrapper[4751]: I1123 04:15:39.678712 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8397006a-36ec-4b4f-a736-d43b0b45188a-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:15:40 crc kubenswrapper[4751]: I1123 04:15:40.177238 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" event={"ID":"8397006a-36ec-4b4f-a736-d43b0b45188a","Type":"ContainerDied","Data":"321d88bfe95fa604f328c74eaf8766a1325ea97c19c2cd8cebbbd7f60e7eba95"} Nov 23 04:15:40 crc kubenswrapper[4751]: I1123 04:15:40.177296 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-r4v9g" Nov 23 04:15:40 crc kubenswrapper[4751]: I1123 04:15:40.177708 4751 scope.go:117] "RemoveContainer" containerID="aa9ae053398b18094484d8b1e956ccb8c32ca838e586ab372e145aa3cbd7bb25" Nov 23 04:15:40 crc kubenswrapper[4751]: I1123 04:15:40.201248 4751 scope.go:117] "RemoveContainer" containerID="cd5076341dda2f82c088683885605fb2b2e8e204407f2ecd40b6815efe30033f" Nov 23 04:15:40 crc kubenswrapper[4751]: I1123 04:15:40.222681 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r4v9g"] Nov 23 04:15:40 crc kubenswrapper[4751]: I1123 04:15:40.237281 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r4v9g"] Nov 23 04:15:40 crc kubenswrapper[4751]: I1123 04:15:40.659694 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8397006a-36ec-4b4f-a736-d43b0b45188a" path="/var/lib/kubelet/pods/8397006a-36ec-4b4f-a736-d43b0b45188a/volumes" Nov 23 04:15:51 crc kubenswrapper[4751]: I1123 04:15:51.318848 4751 generic.go:334] "Generic (PLEG): container finished" podID="bcb1b3bf-2ace-42f9-845f-8b993051016b" containerID="712b2e8d1e20fe1e88160f85447a63bf1eea6c299558bbaba14968f4b2db2297" exitCode=0 Nov 23 04:15:51 crc kubenswrapper[4751]: I1123 04:15:51.318952 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcb1b3bf-2ace-42f9-845f-8b993051016b","Type":"ContainerDied","Data":"712b2e8d1e20fe1e88160f85447a63bf1eea6c299558bbaba14968f4b2db2297"} Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.330820 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bcb1b3bf-2ace-42f9-845f-8b993051016b","Type":"ContainerStarted","Data":"f4e69b190839d810ed266801c4b1133171c03e7df68d81c983d131ac9aa8c059"} Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.331073 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.358166 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.358146039 podStartE2EDuration="36.358146039s" podCreationTimestamp="2025-11-23 04:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:15:52.353803575 +0000 UTC m=+1248.547474974" watchObservedRunningTime="2025-11-23 
04:15:52.358146039 +0000 UTC m=+1248.551817398" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.417711 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6"] Nov 23 04:15:52 crc kubenswrapper[4751]: E1123 04:15:52.418095 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" containerName="init" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.418112 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" containerName="init" Nov 23 04:15:52 crc kubenswrapper[4751]: E1123 04:15:52.418137 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" containerName="dnsmasq-dns" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.418146 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" containerName="dnsmasq-dns" Nov 23 04:15:52 crc kubenswrapper[4751]: E1123 04:15:52.418163 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8397006a-36ec-4b4f-a736-d43b0b45188a" containerName="init" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.418170 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8397006a-36ec-4b4f-a736-d43b0b45188a" containerName="init" Nov 23 04:15:52 crc kubenswrapper[4751]: E1123 04:15:52.418187 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8397006a-36ec-4b4f-a736-d43b0b45188a" containerName="dnsmasq-dns" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.418194 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8397006a-36ec-4b4f-a736-d43b0b45188a" containerName="dnsmasq-dns" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.418429 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8397006a-36ec-4b4f-a736-d43b0b45188a" containerName="dnsmasq-dns" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.418454 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5269f0-23c7-4465-a21d-9f5946acbbfc" containerName="dnsmasq-dns" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.419193 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.423047 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.423073 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.423252 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.423421 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.434137 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6"] Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.537157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prpf2\" (UniqueName: \"kubernetes.io/projected/381c6054-1b64-48db-81d6-12e6a95dcbe2-kube-api-access-prpf2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.537246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.537330 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.537386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.639527 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.639746 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.639833 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.639923 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prpf2\" (UniqueName: \"kubernetes.io/projected/381c6054-1b64-48db-81d6-12e6a95dcbe2-kube-api-access-prpf2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.644434 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.644854 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.657452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prpf2\" (UniqueName: \"kubernetes.io/projected/381c6054-1b64-48db-81d6-12e6a95dcbe2-kube-api-access-prpf2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.657962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:52 crc kubenswrapper[4751]: I1123 04:15:52.740917 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:15:53 crc kubenswrapper[4751]: I1123 04:15:53.316520 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6"] Nov 23 04:15:53 crc kubenswrapper[4751]: W1123 04:15:53.322715 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod381c6054_1b64_48db_81d6_12e6a95dcbe2.slice/crio-15a327a29a8babcd89fa1fce4044f6f32e0d232293e075f1fa47b80555462e6d WatchSource:0}: Error finding container 15a327a29a8babcd89fa1fce4044f6f32e0d232293e075f1fa47b80555462e6d: Status 404 returned error can't find the container with id 15a327a29a8babcd89fa1fce4044f6f32e0d232293e075f1fa47b80555462e6d Nov 23 04:15:53 crc kubenswrapper[4751]: I1123 04:15:53.328045 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:15:53 crc kubenswrapper[4751]: I1123 04:15:53.344106 4751 generic.go:334] "Generic (PLEG): container finished" podID="2ebb2468-5894-4d38-ac88-10033af58026" containerID="a676afd11026616a68cbc62df4c55b45695c1b9dd71206d5d1c3047a1e899bb8" exitCode=0 Nov 23 04:15:53 crc kubenswrapper[4751]: I1123 04:15:53.344224 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ebb2468-5894-4d38-ac88-10033af58026","Type":"ContainerDied","Data":"a676afd11026616a68cbc62df4c55b45695c1b9dd71206d5d1c3047a1e899bb8"} Nov 23 04:15:53 crc kubenswrapper[4751]: I1123 04:15:53.345552 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" event={"ID":"381c6054-1b64-48db-81d6-12e6a95dcbe2","Type":"ContainerStarted","Data":"15a327a29a8babcd89fa1fce4044f6f32e0d232293e075f1fa47b80555462e6d"} Nov 23 04:15:54 crc kubenswrapper[4751]: I1123 04:15:54.358254 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2ebb2468-5894-4d38-ac88-10033af58026","Type":"ContainerStarted","Data":"3c20309e89dc31b99b7a5b27e182535d78ce5debc6acbd46024331e996a19bb9"} Nov 23 04:15:54 crc kubenswrapper[4751]: I1123 04:15:54.358860 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:15:54 crc kubenswrapper[4751]: I1123 04:15:54.383788 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.383773249 podStartE2EDuration="38.383773249s" podCreationTimestamp="2025-11-23 04:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:15:54.381470938 +0000 UTC m=+1250.575142297" watchObservedRunningTime="2025-11-23 04:15:54.383773249 +0000 UTC m=+1250.577444608" Nov 23 04:16:02 crc kubenswrapper[4751]: I1123 04:16:02.442326 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" event={"ID":"381c6054-1b64-48db-81d6-12e6a95dcbe2","Type":"ContainerStarted","Data":"043ea6ff5a8c2ebc4ec9b7471c2dce7457c81cb237440664774f11e181ca1e87"} Nov 23 04:16:02 crc kubenswrapper[4751]: I1123 04:16:02.473206 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" podStartSLOduration=1.8433964760000001 
podStartE2EDuration="10.473184157s" podCreationTimestamp="2025-11-23 04:15:52 +0000 UTC" firstStartedPulling="2025-11-23 04:15:53.327839499 +0000 UTC m=+1249.521510858" lastFinishedPulling="2025-11-23 04:16:01.95762718 +0000 UTC m=+1258.151298539" observedRunningTime="2025-11-23 04:16:02.463147483 +0000 UTC m=+1258.656818852" watchObservedRunningTime="2025-11-23 04:16:02.473184157 +0000 UTC m=+1258.666855536" Nov 23 04:16:06 crc kubenswrapper[4751]: I1123 04:16:06.582835 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 04:16:07 crc kubenswrapper[4751]: I1123 04:16:07.271542 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 04:16:08 crc kubenswrapper[4751]: I1123 04:16:08.115094 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:16:08 crc kubenswrapper[4751]: I1123 04:16:08.115165 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:16:14 crc kubenswrapper[4751]: I1123 04:16:14.587852 4751 generic.go:334] "Generic (PLEG): container finished" podID="381c6054-1b64-48db-81d6-12e6a95dcbe2" containerID="043ea6ff5a8c2ebc4ec9b7471c2dce7457c81cb237440664774f11e181ca1e87" exitCode=0 Nov 23 04:16:14 crc kubenswrapper[4751]: I1123 04:16:14.587952 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" event={"ID":"381c6054-1b64-48db-81d6-12e6a95dcbe2","Type":"ContainerDied","Data":"043ea6ff5a8c2ebc4ec9b7471c2dce7457c81cb237440664774f11e181ca1e87"} Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.136551 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.245382 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prpf2\" (UniqueName: \"kubernetes.io/projected/381c6054-1b64-48db-81d6-12e6a95dcbe2-kube-api-access-prpf2\") pod \"381c6054-1b64-48db-81d6-12e6a95dcbe2\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.245500 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-inventory\") pod \"381c6054-1b64-48db-81d6-12e6a95dcbe2\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.245530 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-repo-setup-combined-ca-bundle\") pod \"381c6054-1b64-48db-81d6-12e6a95dcbe2\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.245550 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-ssh-key\") pod \"381c6054-1b64-48db-81d6-12e6a95dcbe2\" (UID: \"381c6054-1b64-48db-81d6-12e6a95dcbe2\") " Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.252728 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "381c6054-1b64-48db-81d6-12e6a95dcbe2" (UID: "381c6054-1b64-48db-81d6-12e6a95dcbe2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.254400 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381c6054-1b64-48db-81d6-12e6a95dcbe2-kube-api-access-prpf2" (OuterVolumeSpecName: "kube-api-access-prpf2") pod "381c6054-1b64-48db-81d6-12e6a95dcbe2" (UID: "381c6054-1b64-48db-81d6-12e6a95dcbe2"). InnerVolumeSpecName "kube-api-access-prpf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.284313 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "381c6054-1b64-48db-81d6-12e6a95dcbe2" (UID: "381c6054-1b64-48db-81d6-12e6a95dcbe2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.296981 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-inventory" (OuterVolumeSpecName: "inventory") pod "381c6054-1b64-48db-81d6-12e6a95dcbe2" (UID: "381c6054-1b64-48db-81d6-12e6a95dcbe2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.347034 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.347065 4751 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.347076 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/381c6054-1b64-48db-81d6-12e6a95dcbe2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.347087 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prpf2\" (UniqueName: \"kubernetes.io/projected/381c6054-1b64-48db-81d6-12e6a95dcbe2-kube-api-access-prpf2\") on node \"crc\" DevicePath \"\"" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.612590 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" event={"ID":"381c6054-1b64-48db-81d6-12e6a95dcbe2","Type":"ContainerDied","Data":"15a327a29a8babcd89fa1fce4044f6f32e0d232293e075f1fa47b80555462e6d"} Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.612633 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a327a29a8babcd89fa1fce4044f6f32e0d232293e075f1fa47b80555462e6d" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.612676 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.717885 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h"] Nov 23 04:16:16 crc kubenswrapper[4751]: E1123 04:16:16.718520 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="381c6054-1b64-48db-81d6-12e6a95dcbe2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.718553 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="381c6054-1b64-48db-81d6-12e6a95dcbe2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.718916 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="381c6054-1b64-48db-81d6-12e6a95dcbe2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.719872 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.729893 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h"] Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.734857 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.735230 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.735272 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.736667 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.857863 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5cc\" (UniqueName: \"kubernetes.io/projected/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-kube-api-access-4s5cc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.858072 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.858100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.959934 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.959988 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.960107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5cc\" (UniqueName: \"kubernetes.io/projected/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-kube-api-access-4s5cc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.965632 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.966039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:16 crc kubenswrapper[4751]: I1123 04:16:16.984300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5cc\" (UniqueName: \"kubernetes.io/projected/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-kube-api-access-4s5cc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7c89h\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:17 crc kubenswrapper[4751]: I1123 04:16:17.045123 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:17 crc kubenswrapper[4751]: I1123 04:16:17.588488 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h"] Nov 23 04:16:17 crc kubenswrapper[4751]: W1123 04:16:17.591632 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffccd27d_7f9b_49be_9f33_078fc7cdfe25.slice/crio-8f4d9494103f536ce75a83484b479e778ff9145e950edac4801965a1dbe14490 WatchSource:0}: Error finding container 8f4d9494103f536ce75a83484b479e778ff9145e950edac4801965a1dbe14490: Status 404 returned error can't find the container with id 8f4d9494103f536ce75a83484b479e778ff9145e950edac4801965a1dbe14490 Nov 23 04:16:17 crc kubenswrapper[4751]: I1123 04:16:17.626172 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" event={"ID":"ffccd27d-7f9b-49be-9f33-078fc7cdfe25","Type":"ContainerStarted","Data":"8f4d9494103f536ce75a83484b479e778ff9145e950edac4801965a1dbe14490"} Nov 23 04:16:18 crc kubenswrapper[4751]: I1123 04:16:18.639735 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" event={"ID":"ffccd27d-7f9b-49be-9f33-078fc7cdfe25","Type":"ContainerStarted","Data":"748ce9f8b7178e7254c642a0b14780e0cfcf8db64bde6e26f18a9e4981452683"} Nov 23 04:16:18 crc kubenswrapper[4751]: I1123 04:16:18.656901 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" podStartSLOduration=1.977063123 podStartE2EDuration="2.656884335s" podCreationTimestamp="2025-11-23 04:16:16 +0000 UTC" firstStartedPulling="2025-11-23 04:16:17.595600914 +0000 UTC m=+1273.789272283" lastFinishedPulling="2025-11-23 04:16:18.275422106 +0000 UTC m=+1274.469093495" observedRunningTime="2025-11-23 04:16:18.656846754 +0000 UTC m=+1274.850518113" watchObservedRunningTime="2025-11-23 04:16:18.656884335 +0000 UTC 
m=+1274.850555694" Nov 23 04:16:21 crc kubenswrapper[4751]: I1123 04:16:21.672216 4751 generic.go:334] "Generic (PLEG): container finished" podID="ffccd27d-7f9b-49be-9f33-078fc7cdfe25" containerID="748ce9f8b7178e7254c642a0b14780e0cfcf8db64bde6e26f18a9e4981452683" exitCode=0 Nov 23 04:16:21 crc kubenswrapper[4751]: I1123 04:16:21.672438 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" event={"ID":"ffccd27d-7f9b-49be-9f33-078fc7cdfe25","Type":"ContainerDied","Data":"748ce9f8b7178e7254c642a0b14780e0cfcf8db64bde6e26f18a9e4981452683"} Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.170930 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.297833 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-inventory\") pod \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.298033 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5cc\" (UniqueName: \"kubernetes.io/projected/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-kube-api-access-4s5cc\") pod \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.298177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-ssh-key\") pod \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\" (UID: \"ffccd27d-7f9b-49be-9f33-078fc7cdfe25\") " Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.305415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-kube-api-access-4s5cc" (OuterVolumeSpecName: "kube-api-access-4s5cc") pod "ffccd27d-7f9b-49be-9f33-078fc7cdfe25" (UID: "ffccd27d-7f9b-49be-9f33-078fc7cdfe25"). InnerVolumeSpecName "kube-api-access-4s5cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.328749 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-inventory" (OuterVolumeSpecName: "inventory") pod "ffccd27d-7f9b-49be-9f33-078fc7cdfe25" (UID: "ffccd27d-7f9b-49be-9f33-078fc7cdfe25"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.349188 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ffccd27d-7f9b-49be-9f33-078fc7cdfe25" (UID: "ffccd27d-7f9b-49be-9f33-078fc7cdfe25"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.400669 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s5cc\" (UniqueName: \"kubernetes.io/projected/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-kube-api-access-4s5cc\") on node \"crc\" DevicePath \"\"" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.400976 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.401036 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffccd27d-7f9b-49be-9f33-078fc7cdfe25-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.697300 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" event={"ID":"ffccd27d-7f9b-49be-9f33-078fc7cdfe25","Type":"ContainerDied","Data":"8f4d9494103f536ce75a83484b479e778ff9145e950edac4801965a1dbe14490"} Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.697360 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f4d9494103f536ce75a83484b479e778ff9145e950edac4801965a1dbe14490" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.697622 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7c89h" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.817961 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl"] Nov 23 04:16:23 crc kubenswrapper[4751]: E1123 04:16:23.818338 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffccd27d-7f9b-49be-9f33-078fc7cdfe25" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.818377 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffccd27d-7f9b-49be-9f33-078fc7cdfe25" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.818588 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffccd27d-7f9b-49be-9f33-078fc7cdfe25" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.819221 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.821043 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.821791 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.822106 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.823197 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.844033 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl"] Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.910694 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.910793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpmkr\" (UniqueName: \"kubernetes.io/projected/ac73cf10-7aa5-4958-9238-d5473d368ceb-kube-api-access-dpmkr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.911052 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:23 crc kubenswrapper[4751]: I1123 04:16:23.911170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.013543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.013701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: 
\"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.013886 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpmkr\" (UniqueName: \"kubernetes.io/projected/ac73cf10-7aa5-4958-9238-d5473d368ceb-kube-api-access-dpmkr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.013998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.020224 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.022283 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.022949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.042719 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpmkr\" (UniqueName: \"kubernetes.io/projected/ac73cf10-7aa5-4958-9238-d5473d368ceb-kube-api-access-dpmkr\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.139208 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.470917 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl"] Nov 23 04:16:24 crc kubenswrapper[4751]: W1123 04:16:24.473147 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac73cf10_7aa5_4958_9238_d5473d368ceb.slice/crio-d2af1062d524dc996cbce97dfa9757ac261e7d6e6db5a9547408b1e2811d8634 WatchSource:0}: Error finding container d2af1062d524dc996cbce97dfa9757ac261e7d6e6db5a9547408b1e2811d8634: Status 404 returned error can't find the container with id d2af1062d524dc996cbce97dfa9757ac261e7d6e6db5a9547408b1e2811d8634 Nov 23 04:16:24 crc kubenswrapper[4751]: I1123 04:16:24.709268 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" event={"ID":"ac73cf10-7aa5-4958-9238-d5473d368ceb","Type":"ContainerStarted","Data":"d2af1062d524dc996cbce97dfa9757ac261e7d6e6db5a9547408b1e2811d8634"} Nov 23 04:16:25 crc kubenswrapper[4751]: I1123 04:16:25.725463 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" event={"ID":"ac73cf10-7aa5-4958-9238-d5473d368ceb","Type":"ContainerStarted","Data":"81a8d9d433076ae2f7848637d23cff071343b803e9a863d96d4e2b467ec4d71e"} Nov 23 04:16:25 crc kubenswrapper[4751]: I1123 04:16:25.755015 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" podStartSLOduration=2.319307019 podStartE2EDuration="2.754991625s" podCreationTimestamp="2025-11-23 04:16:23 +0000 UTC" firstStartedPulling="2025-11-23 04:16:24.475262116 +0000 UTC m=+1280.668933475" lastFinishedPulling="2025-11-23 04:16:24.910946682 +0000 UTC m=+1281.104618081" observedRunningTime="2025-11-23 04:16:25.750018534 +0000 UTC m=+1281.943689933" watchObservedRunningTime="2025-11-23 04:16:25.754991625 +0000 UTC m=+1281.948662994" Nov 23 04:16:38 crc kubenswrapper[4751]: I1123 04:16:38.114330 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:16:38 crc kubenswrapper[4751]: I1123 04:16:38.115038 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:17:08 crc kubenswrapper[4751]: I1123 04:17:08.115039 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:17:08 crc kubenswrapper[4751]: I1123 04:17:08.115560 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:17:08 crc kubenswrapper[4751]: I1123 04:17:08.115606 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:17:08 crc kubenswrapper[4751]: I1123 04:17:08.116340 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9afc4613e34fddd72c47e695fb68189a79f42c9d6a829aab1415dc9499eb82f"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:17:08 crc kubenswrapper[4751]: I1123 04:17:08.116425 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://f9afc4613e34fddd72c47e695fb68189a79f42c9d6a829aab1415dc9499eb82f" gracePeriod=600 Nov 23 04:17:08 crc kubenswrapper[4751]: I1123 04:17:08.256454 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="f9afc4613e34fddd72c47e695fb68189a79f42c9d6a829aab1415dc9499eb82f" exitCode=0 Nov 23 04:17:08 crc kubenswrapper[4751]: I1123 04:17:08.256519 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"f9afc4613e34fddd72c47e695fb68189a79f42c9d6a829aab1415dc9499eb82f"} Nov 23 04:17:08 crc kubenswrapper[4751]: I1123 04:17:08.256564 4751 scope.go:117] "RemoveContainer" containerID="c3c3326b403c9822bff7bedc4dca6772d05fadb01a9321aa74a5bdfd193f9c8e" Nov 23 04:17:09 crc kubenswrapper[4751]: I1123 04:17:09.273145 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"} Nov 23 04:17:18 crc kubenswrapper[4751]: I1123 04:17:18.731462 4751 scope.go:117] "RemoveContainer" containerID="41d80b213856f906a9a5f575e3873e0a726abaa8898d570d4813e6b208ad9315" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.097925 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4xclv"] Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.101148 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.113006 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xclv"] Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.246377 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sz29\" (UniqueName: \"kubernetes.io/projected/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-kube-api-access-8sz29\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.246496 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-utilities\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.246671 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-catalog-content\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.348836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-utilities\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.349215 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-catalog-content\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.349415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sz29\" (UniqueName: \"kubernetes.io/projected/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-kube-api-access-8sz29\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.349491 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-utilities\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.349704 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-catalog-content\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.386982 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8sz29\" (UniqueName: \"kubernetes.io/projected/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-kube-api-access-8sz29\") pod \"certified-operators-4xclv\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.437429 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:11 crc kubenswrapper[4751]: I1123 04:18:11.947850 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xclv"] Nov 23 04:18:11 crc kubenswrapper[4751]: W1123 04:18:11.952673 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e49e5d7_50c5_4db5_91c6_a50fa08805c4.slice/crio-d04321aea1ed23342113bd5b7f09d73ae7ca277215c4fe75ffb3ddd4d02bee73 WatchSource:0}: Error finding container d04321aea1ed23342113bd5b7f09d73ae7ca277215c4fe75ffb3ddd4d02bee73: Status 404 returned error can't find the container with id d04321aea1ed23342113bd5b7f09d73ae7ca277215c4fe75ffb3ddd4d02bee73 Nov 23 04:18:12 crc kubenswrapper[4751]: I1123 04:18:12.033403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xclv" event={"ID":"1e49e5d7-50c5-4db5-91c6-a50fa08805c4","Type":"ContainerStarted","Data":"d04321aea1ed23342113bd5b7f09d73ae7ca277215c4fe75ffb3ddd4d02bee73"} Nov 23 04:18:13 crc kubenswrapper[4751]: I1123 04:18:13.051804 4751 generic.go:334] "Generic (PLEG): container finished" podID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerID="48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e" exitCode=0 Nov 23 04:18:13 crc kubenswrapper[4751]: I1123 04:18:13.051905 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xclv" event={"ID":"1e49e5d7-50c5-4db5-91c6-a50fa08805c4","Type":"ContainerDied","Data":"48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e"} Nov 23 04:18:15 crc kubenswrapper[4751]: I1123 04:18:15.080575 4751 generic.go:334] "Generic (PLEG): container finished" podID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerID="1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296" exitCode=0 Nov 23 04:18:15 crc kubenswrapper[4751]: I1123 04:18:15.080672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xclv" event={"ID":"1e49e5d7-50c5-4db5-91c6-a50fa08805c4","Type":"ContainerDied","Data":"1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296"} Nov 23 04:18:16 crc kubenswrapper[4751]: I1123 04:18:16.096072 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xclv" event={"ID":"1e49e5d7-50c5-4db5-91c6-a50fa08805c4","Type":"ContainerStarted","Data":"f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e"} Nov 23 04:18:16 crc kubenswrapper[4751]: I1123 04:18:16.117465 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4xclv" podStartSLOduration=2.676547346 podStartE2EDuration="5.117450072s" podCreationTimestamp="2025-11-23 04:18:11 +0000 UTC" firstStartedPulling="2025-11-23 04:18:13.054323592 +0000 UTC m=+1389.247994981" lastFinishedPulling="2025-11-23 04:18:15.495226298 +0000 UTC m=+1391.688897707" observedRunningTime="2025-11-23 04:18:16.116056045 +0000 UTC 
m=+1392.309727414" watchObservedRunningTime="2025-11-23 04:18:16.117450072 +0000 UTC m=+1392.311121431" Nov 23 04:18:18 crc kubenswrapper[4751]: I1123 04:18:18.800751 4751 scope.go:117] "RemoveContainer" containerID="539108f2455714d29035d73da756a01b46a0859d989aea077908301113d08039" Nov 23 04:18:18 crc kubenswrapper[4751]: I1123 04:18:18.841151 4751 scope.go:117] "RemoveContainer" containerID="7c5504eb784fd63f833ea87510c6f8fa59ed1d7e2de7b5b9101fe4cdc91e1ea4" Nov 23 04:18:18 crc kubenswrapper[4751]: I1123 04:18:18.873810 4751 scope.go:117] "RemoveContainer" containerID="1dee67d97dbfdf8194c1fd9a53f6643b7a9747052f94be138bf4f11bf799e1e9" Nov 23 04:18:21 crc kubenswrapper[4751]: I1123 04:18:21.438791 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:21 crc kubenswrapper[4751]: I1123 04:18:21.439580 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:21 crc kubenswrapper[4751]: I1123 04:18:21.505731 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:22 crc kubenswrapper[4751]: I1123 04:18:22.248874 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:22 crc kubenswrapper[4751]: I1123 04:18:22.324917 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xclv"] Nov 23 04:18:24 crc kubenswrapper[4751]: I1123 04:18:24.193552 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4xclv" podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerName="registry-server" containerID="cri-o://f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e" gracePeriod=2 Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:24.668877 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:24.730463 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-utilities\") pod \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:24.730512 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sz29\" (UniqueName: \"kubernetes.io/projected/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-kube-api-access-8sz29\") pod \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:24.730630 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-catalog-content\") pod \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\" (UID: \"1e49e5d7-50c5-4db5-91c6-a50fa08805c4\") " Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:24.732794 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-utilities" (OuterVolumeSpecName: "utilities") pod "1e49e5d7-50c5-4db5-91c6-a50fa08805c4" (UID: "1e49e5d7-50c5-4db5-91c6-a50fa08805c4"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:24.747677 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-kube-api-access-8sz29" (OuterVolumeSpecName: "kube-api-access-8sz29") pod "1e49e5d7-50c5-4db5-91c6-a50fa08805c4" (UID: "1e49e5d7-50c5-4db5-91c6-a50fa08805c4"). InnerVolumeSpecName "kube-api-access-8sz29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:24.833504 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:24.833531 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sz29\" (UniqueName: \"kubernetes.io/projected/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-kube-api-access-8sz29\") on node \"crc\" DevicePath \"\"" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.135323 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e49e5d7-50c5-4db5-91c6-a50fa08805c4" (UID: "1e49e5d7-50c5-4db5-91c6-a50fa08805c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.142649 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e49e5d7-50c5-4db5-91c6-a50fa08805c4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.210576 4751 generic.go:334] "Generic (PLEG): container finished" podID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerID="f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e" exitCode=0 Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.210623 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xclv" event={"ID":"1e49e5d7-50c5-4db5-91c6-a50fa08805c4","Type":"ContainerDied","Data":"f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e"} Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.210653 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xclv" event={"ID":"1e49e5d7-50c5-4db5-91c6-a50fa08805c4","Type":"ContainerDied","Data":"d04321aea1ed23342113bd5b7f09d73ae7ca277215c4fe75ffb3ddd4d02bee73"} Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.210674 4751 scope.go:117] "RemoveContainer" containerID="f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.210691 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4xclv" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.271405 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xclv"] Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.272755 4751 scope.go:117] "RemoveContainer" containerID="1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.287658 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4xclv"] Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.308124 4751 scope.go:117] "RemoveContainer" containerID="48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.392129 4751 scope.go:117] "RemoveContainer" containerID="f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e" Nov 23 04:18:25 crc kubenswrapper[4751]: E1123 04:18:25.394049 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e\": container with ID starting with f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e not found: ID does not exist" containerID="f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.394134 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e"} err="failed to get container status \"f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e\": rpc error: code = NotFound desc = could not find container \"f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e\": container with ID starting with f7f3ad023b2043c98113a0ab56fc921b480ed16cdef96ead078e6d0effaf8a9e not found: ID does not exist" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.394196 4751 scope.go:117] "RemoveContainer" containerID="1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296" Nov 23 04:18:25 crc kubenswrapper[4751]: E1123 04:18:25.394713 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296\": container with ID starting with 1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296 not found: ID does not exist" containerID="1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.394784 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296"} err="failed to get container status \"1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296\": rpc error: code = NotFound desc = could not find container \"1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296\": container with ID starting with 1afd7842acae80c3cdca5fa9a520932cbfb6020bd0d226046e894586b2fdf296 not found: ID does not exist" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.394909 4751 scope.go:117] "RemoveContainer" containerID="48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e" Nov 23 04:18:25 crc kubenswrapper[4751]: E1123 04:18:25.395433 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e\": container with ID starting with 48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e not found: ID does not exist" containerID="48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e" Nov 23 04:18:25 crc kubenswrapper[4751]: I1123 04:18:25.395469 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e"} err="failed to get container status \"48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e\": rpc error: code = NotFound desc = could not find container \"48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e\": container with ID starting with 48530532726bc364a699ef2577b4db6e92f8167f16ac2720a921eeba590bda8e not found: ID does not exist" Nov 23 04:18:26 crc kubenswrapper[4751]: I1123 04:18:26.656982 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" path="/var/lib/kubelet/pods/1e49e5d7-50c5-4db5-91c6-a50fa08805c4/volumes" Nov 23 04:19:08 crc kubenswrapper[4751]: I1123 04:19:08.114971 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:19:08 crc kubenswrapper[4751]: I1123 04:19:08.115563 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:19:19 crc kubenswrapper[4751]: I1123 04:19:19.024169 4751 scope.go:117] "RemoveContainer" containerID="d9023461fea54c9fbeac1c5b5d5d1b23fc656b57c9e16117903ed72186d512b7" Nov 23 04:19:19 crc kubenswrapper[4751]: I1123 04:19:19.053727 4751 scope.go:117] "RemoveContainer" containerID="50f11ef90b37dfe8f14d32269d395264b59592abc8355fd0b715de682bd7c3a4" Nov 23 04:19:19 crc kubenswrapper[4751]: I1123 04:19:19.092062 4751 scope.go:117] "RemoveContainer" containerID="73c422c1c8333e650173b7cf879fa479ac91e87b643e1c9fdab81b0995d459ca" Nov 23 04:19:19 crc kubenswrapper[4751]: I1123 04:19:19.128166 4751 scope.go:117] "RemoveContainer" containerID="195b151c5aa696889f3e769f98cd9ad3a11888fbd70724070ad788cd41955721" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.832992 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgx6c"] Nov 23 04:19:34 crc kubenswrapper[4751]: E1123 04:19:34.833986 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerName="registry-server" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.834002 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerName="registry-server" Nov 23 04:19:34 crc kubenswrapper[4751]: E1123 04:19:34.834052 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerName="extract-content" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.834060 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerName="extract-content" Nov 23 04:19:34 crc kubenswrapper[4751]: E1123 04:19:34.834083 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerName="extract-utilities" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.834092 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerName="extract-utilities" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.834562 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e49e5d7-50c5-4db5-91c6-a50fa08805c4" containerName="registry-server" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.836412 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.849418 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgx6c"] Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.913967 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlvj\" (UniqueName: \"kubernetes.io/projected/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-kube-api-access-9qlvj\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.914068 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-catalog-content\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:34 crc kubenswrapper[4751]: I1123 04:19:34.914119 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-utilities\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:35 crc kubenswrapper[4751]: I1123 04:19:35.015701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qlvj\" (UniqueName: \"kubernetes.io/projected/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-kube-api-access-9qlvj\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:35 crc kubenswrapper[4751]: I1123 04:19:35.015784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-catalog-content\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:35 crc kubenswrapper[4751]: I1123 04:19:35.015832 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-utilities\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:35 crc kubenswrapper[4751]: I1123 04:19:35.016374 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-utilities\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:35 crc kubenswrapper[4751]: I1123 04:19:35.016718 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-catalog-content\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:35 crc kubenswrapper[4751]: I1123 04:19:35.035884 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qlvj\" (UniqueName: \"kubernetes.io/projected/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-kube-api-access-9qlvj\") pod \"redhat-marketplace-fgx6c\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:35 crc kubenswrapper[4751]: I1123 04:19:35.164839 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:35 crc kubenswrapper[4751]: I1123 04:19:35.708782 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgx6c"] Nov 23 04:19:36 crc kubenswrapper[4751]: I1123 04:19:36.027626 4751 generic.go:334] "Generic (PLEG): container finished" podID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerID="0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2" exitCode=0 Nov 23 04:19:36 crc kubenswrapper[4751]: I1123 04:19:36.027722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgx6c" event={"ID":"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b","Type":"ContainerDied","Data":"0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2"} Nov 23 04:19:36 crc kubenswrapper[4751]: I1123 04:19:36.027955 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgx6c" event={"ID":"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b","Type":"ContainerStarted","Data":"1ec396c969d29c9e69f78430d24bc0331a3f2e9a6542172397789f9c82ac85c8"} Nov 23 04:19:37 crc kubenswrapper[4751]: I1123 04:19:37.043095 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgx6c" event={"ID":"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b","Type":"ContainerStarted","Data":"b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1"} Nov 23 04:19:38 crc kubenswrapper[4751]: I1123 04:19:38.053077 4751 generic.go:334] "Generic (PLEG): container finished" podID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerID="b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1" exitCode=0 Nov 23 04:19:38 crc kubenswrapper[4751]: I1123 04:19:38.053160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgx6c" event={"ID":"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b","Type":"ContainerDied","Data":"b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1"} Nov 23 04:19:38 crc kubenswrapper[4751]: I1123 04:19:38.115132 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:19:38 crc kubenswrapper[4751]: I1123 04:19:38.115211 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:19:39 crc kubenswrapper[4751]: I1123 04:19:39.065164 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgx6c" event={"ID":"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b","Type":"ContainerStarted","Data":"670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16"} Nov 23 04:19:39 crc kubenswrapper[4751]: I1123 04:19:39.138570 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgx6c" podStartSLOduration=2.7111127440000002 podStartE2EDuration="5.138549008s" podCreationTimestamp="2025-11-23 04:19:34 +0000 UTC" firstStartedPulling="2025-11-23 04:19:36.02924229 +0000 UTC m=+1472.222913649" lastFinishedPulling="2025-11-23 04:19:38.456678544 +0000 UTC m=+1474.650349913" observedRunningTime="2025-11-23 04:19:39.116747197 +0000 UTC m=+1475.310418556" watchObservedRunningTime="2025-11-23 04:19:39.138549008 +0000 UTC m=+1475.332220367" Nov 23 04:19:41 crc kubenswrapper[4751]: I1123 04:19:41.096055 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac73cf10-7aa5-4958-9238-d5473d368ceb" containerID="81a8d9d433076ae2f7848637d23cff071343b803e9a863d96d4e2b467ec4d71e" exitCode=0 Nov 23 04:19:41 crc kubenswrapper[4751]: I1123 04:19:41.096149 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" event={"ID":"ac73cf10-7aa5-4958-9238-d5473d368ceb","Type":"ContainerDied","Data":"81a8d9d433076ae2f7848637d23cff071343b803e9a863d96d4e2b467ec4d71e"} Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.555116 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.659812 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpmkr\" (UniqueName: \"kubernetes.io/projected/ac73cf10-7aa5-4958-9238-d5473d368ceb-kube-api-access-dpmkr\") pod \"ac73cf10-7aa5-4958-9238-d5473d368ceb\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.659856 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-inventory\") pod \"ac73cf10-7aa5-4958-9238-d5473d368ceb\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.660029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-bootstrap-combined-ca-bundle\") pod \"ac73cf10-7aa5-4958-9238-d5473d368ceb\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.660059 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-ssh-key\") pod \"ac73cf10-7aa5-4958-9238-d5473d368ceb\" (UID: \"ac73cf10-7aa5-4958-9238-d5473d368ceb\") " Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.665239 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac73cf10-7aa5-4958-9238-d5473d368ceb-kube-api-access-dpmkr" (OuterVolumeSpecName: "kube-api-access-dpmkr") pod "ac73cf10-7aa5-4958-9238-d5473d368ceb" (UID: "ac73cf10-7aa5-4958-9238-d5473d368ceb"). InnerVolumeSpecName "kube-api-access-dpmkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.665809 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ac73cf10-7aa5-4958-9238-d5473d368ceb" (UID: "ac73cf10-7aa5-4958-9238-d5473d368ceb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.686950 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-inventory" (OuterVolumeSpecName: "inventory") pod "ac73cf10-7aa5-4958-9238-d5473d368ceb" (UID: "ac73cf10-7aa5-4958-9238-d5473d368ceb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.687071 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ac73cf10-7aa5-4958-9238-d5473d368ceb" (UID: "ac73cf10-7aa5-4958-9238-d5473d368ceb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.762005 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpmkr\" (UniqueName: \"kubernetes.io/projected/ac73cf10-7aa5-4958-9238-d5473d368ceb-kube-api-access-dpmkr\") on node \"crc\" DevicePath \"\"" Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.762038 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.762050 4751 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:19:42 crc kubenswrapper[4751]: I1123 04:19:42.762057 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac73cf10-7aa5-4958-9238-d5473d368ceb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.126044 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" event={"ID":"ac73cf10-7aa5-4958-9238-d5473d368ceb","Type":"ContainerDied","Data":"d2af1062d524dc996cbce97dfa9757ac261e7d6e6db5a9547408b1e2811d8634"} Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.126083 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.126116 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2af1062d524dc996cbce97dfa9757ac261e7d6e6db5a9547408b1e2811d8634" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.335463 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg"] Nov 23 04:19:43 crc kubenswrapper[4751]: E1123 04:19:43.335894 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac73cf10-7aa5-4958-9238-d5473d368ceb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.335912 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac73cf10-7aa5-4958-9238-d5473d368ceb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.336119 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac73cf10-7aa5-4958-9238-d5473d368ceb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.336764 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.347004 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.347207 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.349921 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.350692 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.378914 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg"] Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.476255 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnnb2\" (UniqueName: \"kubernetes.io/projected/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-kube-api-access-mnnb2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.476611 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.476728 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.578615 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.578719 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnnb2\" (UniqueName: \"kubernetes.io/projected/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-kube-api-access-mnnb2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.578750 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.583058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.583062 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.602857 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnnb2\" (UniqueName: \"kubernetes.io/projected/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-kube-api-access-mnnb2\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99kcg\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:43 crc kubenswrapper[4751]: I1123 04:19:43.660210 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:19:44 crc kubenswrapper[4751]: I1123 04:19:44.225374 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg"] Nov 23 04:19:45 crc kubenswrapper[4751]: I1123 04:19:45.149640 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" event={"ID":"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649","Type":"ContainerStarted","Data":"3f2f5252d9d1379b697978e25dba00c7998ea919a30337d4ca242613933f1251"} Nov 23 04:19:45 crc kubenswrapper[4751]: I1123 04:19:45.150025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" event={"ID":"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649","Type":"ContainerStarted","Data":"8c2234d8af05d559627b60ddfd5e53b50f32bf5b49c82b6bd38a348719cb18e6"} Nov 23 04:19:45 crc kubenswrapper[4751]: I1123 04:19:45.165706 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:45 crc kubenswrapper[4751]: I1123 04:19:45.166684 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:45 crc kubenswrapper[4751]: I1123 04:19:45.180928 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" podStartSLOduration=1.709655738 podStartE2EDuration="2.18089883s" podCreationTimestamp="2025-11-23 04:19:43 +0000 UTC" firstStartedPulling="2025-11-23 04:19:44.233025015 +0000 UTC m=+1480.426696384" lastFinishedPulling="2025-11-23 04:19:44.704268117 +0000 UTC m=+1480.897939476" observedRunningTime="2025-11-23 04:19:45.164243845 +0000 UTC m=+1481.357915254" watchObservedRunningTime="2025-11-23 
04:19:45.18089883 +0000 UTC m=+1481.374570219" Nov 23 04:19:45 crc kubenswrapper[4751]: I1123 04:19:45.225624 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:46 crc kubenswrapper[4751]: I1123 04:19:46.213923 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:46 crc kubenswrapper[4751]: I1123 04:19:46.286424 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgx6c"] Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.185272 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgx6c" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerName="registry-server" containerID="cri-o://670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16" gracePeriod=2 Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.705143 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.777155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qlvj\" (UniqueName: \"kubernetes.io/projected/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-kube-api-access-9qlvj\") pod \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.777308 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-utilities\") pod \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.777484 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-catalog-content\") pod \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\" (UID: \"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b\") " Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.778379 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-utilities" (OuterVolumeSpecName: "utilities") pod "4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" (UID: "4bcd05c4-2b61-4ef2-9c88-ee52def3c55b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.786475 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-kube-api-access-9qlvj" (OuterVolumeSpecName: "kube-api-access-9qlvj") pod "4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" (UID: "4bcd05c4-2b61-4ef2-9c88-ee52def3c55b"). InnerVolumeSpecName "kube-api-access-9qlvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.814711 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" (UID: "4bcd05c4-2b61-4ef2-9c88-ee52def3c55b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.879148 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.879183 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qlvj\" (UniqueName: \"kubernetes.io/projected/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-kube-api-access-9qlvj\") on node \"crc\" DevicePath \"\"" Nov 23 04:19:48 crc kubenswrapper[4751]: I1123 04:19:48.879193 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.198212 4751 generic.go:334] "Generic (PLEG): container finished" podID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerID="670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16" exitCode=0 Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.198259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgx6c" event={"ID":"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b","Type":"ContainerDied","Data":"670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16"} Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.198288 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgx6c" event={"ID":"4bcd05c4-2b61-4ef2-9c88-ee52def3c55b","Type":"ContainerDied","Data":"1ec396c969d29c9e69f78430d24bc0331a3f2e9a6542172397789f9c82ac85c8"} Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.198309 4751 scope.go:117] "RemoveContainer" containerID="670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.198456 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgx6c" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.255185 4751 scope.go:117] "RemoveContainer" containerID="b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.256931 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgx6c"] Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.268759 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgx6c"] Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.279220 4751 scope.go:117] "RemoveContainer" containerID="0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.327873 4751 scope.go:117] "RemoveContainer" containerID="670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16" Nov 23 04:19:49 crc kubenswrapper[4751]: E1123 04:19:49.328794 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16\": container with ID starting with 670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16 not found: ID does not exist" containerID="670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.328953 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16"} err="failed to get container status \"670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16\": rpc error: code = NotFound desc = could not find container \"670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16\": container with ID starting with 670bac3eaad7065bbcfd96c0b7c0508469fae9ece2fa1174cf465858d86d6f16 not found: ID does not exist" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.329013 4751 scope.go:117] "RemoveContainer" containerID="b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1" Nov 23 04:19:49 crc kubenswrapper[4751]: E1123 04:19:49.329474 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1\": container with ID starting with b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1 not found: ID does not exist" containerID="b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.329521 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1"} err="failed to get container status \"b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1\": rpc error: code = NotFound desc = could not find container \"b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1\": container with ID starting with b95786a96c323eb1a6cbdc6540725c2de4559752d3f81281172d9c63e3b802d1 not found: ID does not exist" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.329551 4751 scope.go:117] "RemoveContainer" containerID="0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2" Nov 23 04:19:49 crc kubenswrapper[4751]: E1123 04:19:49.330798 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2\": container with ID starting with 0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2 not found: ID does not exist" containerID="0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2" Nov 23 04:19:49 crc kubenswrapper[4751]: I1123 04:19:49.330824 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2"} err="failed to get container status \"0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2\": rpc error: code = NotFound desc = could not find container \"0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2\": container with ID starting with 0d311dad4ccae0b71498f4df7c6879bce6e69628c01ba3f0eb6b150ff9f51df2 not found: ID does not exist" Nov 23 04:19:50 crc kubenswrapper[4751]: I1123 04:19:50.655258 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" path="/var/lib/kubelet/pods/4bcd05c4-2b61-4ef2-9c88-ee52def3c55b/volumes" Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.114709 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.115236 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.115275 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.115806 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.115858 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" gracePeriod=600 Nov 23 04:20:08 crc kubenswrapper[4751]: E1123 04:20:08.264947 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.423668 4751 generic.go:334] 
"Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" exitCode=0 Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.423707 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"} Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.423743 4751 scope.go:117] "RemoveContainer" containerID="f9afc4613e34fddd72c47e695fb68189a79f42c9d6a829aab1415dc9499eb82f" Nov 23 04:20:08 crc kubenswrapper[4751]: I1123 04:20:08.424501 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:20:08 crc kubenswrapper[4751]: E1123 04:20:08.424905 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.752896 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qfqkp"] Nov 23 04:20:09 crc kubenswrapper[4751]: E1123 04:20:09.753835 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerName="registry-server" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.753867 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerName="registry-server" Nov 23 04:20:09 crc kubenswrapper[4751]: E1123 04:20:09.753917 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerName="extract-content" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.753930 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerName="extract-content" Nov 23 04:20:09 crc kubenswrapper[4751]: E1123 04:20:09.753959 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerName="extract-utilities" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.753973 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerName="extract-utilities" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.754350 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcd05c4-2b61-4ef2-9c88-ee52def3c55b" containerName="registry-server" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.756849 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.783798 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qfqkp"] Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.918636 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-utilities\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.918751 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-catalog-content\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:09 crc kubenswrapper[4751]: I1123 04:20:09.918933 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvhzz\" (UniqueName: \"kubernetes.io/projected/e8b050e0-d83c-496d-8156-dbfa8c2b130f-kube-api-access-rvhzz\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.021072 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-utilities\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.021122 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-catalog-content\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.021164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvhzz\" (UniqueName: \"kubernetes.io/projected/e8b050e0-d83c-496d-8156-dbfa8c2b130f-kube-api-access-rvhzz\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.022126 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-utilities\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.022306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-catalog-content\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.043824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rvhzz\" (UniqueName: \"kubernetes.io/projected/e8b050e0-d83c-496d-8156-dbfa8c2b130f-kube-api-access-rvhzz\") pod \"redhat-operators-qfqkp\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.085888 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.356091 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qfqkp"] Nov 23 04:20:10 crc kubenswrapper[4751]: I1123 04:20:10.446281 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfqkp" event={"ID":"e8b050e0-d83c-496d-8156-dbfa8c2b130f","Type":"ContainerStarted","Data":"72f667ed0569c876ae454a43674c91d2809d0d4e4a606aba5ab047219e28b36f"} Nov 23 04:20:11 crc kubenswrapper[4751]: I1123 04:20:11.459546 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerID="b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7" exitCode=0 Nov 23 04:20:11 crc kubenswrapper[4751]: I1123 04:20:11.459615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfqkp" event={"ID":"e8b050e0-d83c-496d-8156-dbfa8c2b130f","Type":"ContainerDied","Data":"b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7"} Nov 23 04:20:12 crc kubenswrapper[4751]: I1123 04:20:12.478517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfqkp" event={"ID":"e8b050e0-d83c-496d-8156-dbfa8c2b130f","Type":"ContainerStarted","Data":"19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a"} Nov 23 04:20:14 crc kubenswrapper[4751]: I1123 04:20:14.503850 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerID="19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a" exitCode=0 Nov 23 04:20:14 crc kubenswrapper[4751]: I1123 04:20:14.503951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfqkp" event={"ID":"e8b050e0-d83c-496d-8156-dbfa8c2b130f","Type":"ContainerDied","Data":"19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a"} Nov 23 04:20:15 crc kubenswrapper[4751]: I1123 04:20:15.518644 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfqkp" event={"ID":"e8b050e0-d83c-496d-8156-dbfa8c2b130f","Type":"ContainerStarted","Data":"f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a"} Nov 23 04:20:15 crc kubenswrapper[4751]: I1123 04:20:15.558248 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qfqkp" podStartSLOduration=2.9855950570000003 podStartE2EDuration="6.558229948s" podCreationTimestamp="2025-11-23 04:20:09 +0000 UTC" firstStartedPulling="2025-11-23 04:20:11.463270758 +0000 UTC m=+1507.656942157" lastFinishedPulling="2025-11-23 04:20:15.035905659 +0000 UTC m=+1511.229577048" observedRunningTime="2025-11-23 04:20:15.547523468 +0000 UTC m=+1511.741194827" watchObservedRunningTime="2025-11-23 04:20:15.558229948 +0000 UTC m=+1511.751901297" Nov 23 04:20:19 crc kubenswrapper[4751]: I1123 04:20:19.224037 4751 scope.go:117] "RemoveContainer" containerID="e5b828d2434689aad86371b4d236350434d18b6f2987fd4086a6187cd6851426" Nov 23 
04:20:19 crc kubenswrapper[4751]: I1123 04:20:19.247762 4751 scope.go:117] "RemoveContainer" containerID="6d428b9413aece6d1558e48b34e81533cd6182ab760056614e53148661e46225" Nov 23 04:20:20 crc kubenswrapper[4751]: I1123 04:20:20.086955 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:20 crc kubenswrapper[4751]: I1123 04:20:20.087245 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:21 crc kubenswrapper[4751]: I1123 04:20:21.167406 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qfqkp" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="registry-server" probeResult="failure" output=< Nov 23 04:20:21 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Nov 23 04:20:21 crc kubenswrapper[4751]: > Nov 23 04:20:21 crc kubenswrapper[4751]: I1123 04:20:21.644841 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:20:21 crc kubenswrapper[4751]: E1123 04:20:21.645225 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:20:30 crc kubenswrapper[4751]: I1123 04:20:30.169062 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:30 crc kubenswrapper[4751]: I1123 04:20:30.256551 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:30 crc kubenswrapper[4751]: I1123 04:20:30.418686 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qfqkp"] Nov 23 04:20:31 crc kubenswrapper[4751]: I1123 04:20:31.691773 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qfqkp" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="registry-server" containerID="cri-o://f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a" gracePeriod=2 Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.249930 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.391967 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-utilities\") pod \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.392445 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvhzz\" (UniqueName: \"kubernetes.io/projected/e8b050e0-d83c-496d-8156-dbfa8c2b130f-kube-api-access-rvhzz\") pod \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.392572 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-catalog-content\") pod \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\" (UID: \"e8b050e0-d83c-496d-8156-dbfa8c2b130f\") " Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.393367 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-utilities" (OuterVolumeSpecName: "utilities") pod "e8b050e0-d83c-496d-8156-dbfa8c2b130f" (UID: "e8b050e0-d83c-496d-8156-dbfa8c2b130f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.400684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b050e0-d83c-496d-8156-dbfa8c2b130f-kube-api-access-rvhzz" (OuterVolumeSpecName: "kube-api-access-rvhzz") pod "e8b050e0-d83c-496d-8156-dbfa8c2b130f" (UID: "e8b050e0-d83c-496d-8156-dbfa8c2b130f"). InnerVolumeSpecName "kube-api-access-rvhzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.476814 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8b050e0-d83c-496d-8156-dbfa8c2b130f" (UID: "e8b050e0-d83c-496d-8156-dbfa8c2b130f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.495299 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.495563 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b050e0-d83c-496d-8156-dbfa8c2b130f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.495648 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvhzz\" (UniqueName: \"kubernetes.io/projected/e8b050e0-d83c-496d-8156-dbfa8c2b130f-kube-api-access-rvhzz\") on node \"crc\" DevicePath \"\"" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.708035 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerID="f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a" exitCode=0 Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.708169 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfqkp" event={"ID":"e8b050e0-d83c-496d-8156-dbfa8c2b130f","Type":"ContainerDied","Data":"f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a"} Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.708203 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qfqkp" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.708751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qfqkp" event={"ID":"e8b050e0-d83c-496d-8156-dbfa8c2b130f","Type":"ContainerDied","Data":"72f667ed0569c876ae454a43674c91d2809d0d4e4a606aba5ab047219e28b36f"} Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.708811 4751 scope.go:117] "RemoveContainer" containerID="f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.751981 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qfqkp"] Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.761630 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qfqkp"] Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.768586 4751 scope.go:117] "RemoveContainer" containerID="19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.805714 4751 scope.go:117] "RemoveContainer" containerID="b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.852704 4751 scope.go:117] "RemoveContainer" containerID="f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a" Nov 23 04:20:32 crc kubenswrapper[4751]: E1123 04:20:32.853166 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a\": container with ID starting with f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a not found: ID does not exist" containerID="f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.853220 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a"} err="failed to get container status \"f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a\": rpc error: code = NotFound desc = could not find container \"f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a\": container with ID starting with f30d18820a1519fe67609db0e0b08460230f48f92252c9c90714d6b32139767a not found: ID does not exist" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.853251 4751 scope.go:117] "RemoveContainer" containerID="19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a" Nov 23 04:20:32 crc kubenswrapper[4751]: E1123 04:20:32.853800 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a\": container with ID starting with 19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a not found: ID does not exist" containerID="19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.853837 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a"} err="failed to get container status \"19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a\": rpc error: code = NotFound desc = could not find container \"19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a\": container with ID starting with 19329ea321bca0286f3539386a14894d48a97063e35ae68cdf437126ba4e385a not found: ID does not exist" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.853868 4751 scope.go:117] "RemoveContainer" containerID="b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7" Nov 23 04:20:32 crc kubenswrapper[4751]: E1123 04:20:32.854246 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7\": container with ID starting with b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7 not found: ID does not exist" containerID="b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7" Nov 23 04:20:32 crc kubenswrapper[4751]: I1123 04:20:32.854280 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7"} err="failed to get container status \"b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7\": rpc error: code = NotFound desc = could not find container \"b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7\": container with ID starting with b946841fa9fab31658ee032147ab1efd1a8eee5c9487395a4983c79576f55ac7 not found: ID does not exist" Nov 23 04:20:34 crc kubenswrapper[4751]: I1123 04:20:34.656989 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:20:34 crc kubenswrapper[4751]: I1123 04:20:34.657368 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" path="/var/lib/kubelet/pods/e8b050e0-d83c-496d-8156-dbfa8c2b130f/volumes" Nov 23 04:20:34 crc kubenswrapper[4751]: E1123 04:20:34.657613 4751 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.070324 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vntp5"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.087873 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-vhzdn"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.098718 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b477-account-create-jzg52"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.112985 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-61c3-account-create-x5dz4"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.124310 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-190a-account-create-nnc9s"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.132532 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b477-account-create-jzg52"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.140125 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pfqxb"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.147792 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-vhzdn"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.157533 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vntp5"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.162375 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pfqxb"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.170951 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-190a-account-create-nnc9s"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.180441 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-61c3-account-create-x5dz4"] Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.660576 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0834e6ad-8a18-427f-a4d6-94bfed0574bf" path="/var/lib/kubelet/pods/0834e6ad-8a18-427f-a4d6-94bfed0574bf/volumes" Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.661820 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bda3488-1b2c-4014-8eac-2abd8308af72" path="/var/lib/kubelet/pods/4bda3488-1b2c-4014-8eac-2abd8308af72/volumes" Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.662915 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f950d97-cf8c-49c5-88a9-5ec34b3a71f2" path="/var/lib/kubelet/pods/7f950d97-cf8c-49c5-88a9-5ec34b3a71f2/volumes" Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.663993 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96958e1-1100-47e1-a5e9-cf21ef25e4cb" path="/var/lib/kubelet/pods/a96958e1-1100-47e1-a5e9-cf21ef25e4cb/volumes" Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.665950 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce2939ea-7439-4036-8964-12f56d55b9e3" 
path="/var/lib/kubelet/pods/ce2939ea-7439-4036-8964-12f56d55b9e3/volumes" Nov 23 04:20:40 crc kubenswrapper[4751]: I1123 04:20:40.667034 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3520f7c-fe55-4916-a621-82c88870e84f" path="/var/lib/kubelet/pods/d3520f7c-fe55-4916-a621-82c88870e84f/volumes" Nov 23 04:20:49 crc kubenswrapper[4751]: I1123 04:20:49.643885 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:20:49 crc kubenswrapper[4751]: E1123 04:20:49.644748 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:21:03 crc kubenswrapper[4751]: I1123 04:21:03.644919 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:21:03 crc kubenswrapper[4751]: E1123 04:21:03.646063 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:21:09 crc kubenswrapper[4751]: I1123 04:21:09.066507 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-sp5s2"] Nov 23 04:21:09 crc kubenswrapper[4751]: I1123 04:21:09.081623 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-sp5s2"] Nov 23 04:21:10 crc kubenswrapper[4751]: I1123 04:21:10.664726 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e010e2e6-3482-4511-8540-46aef4db130e" path="/var/lib/kubelet/pods/e010e2e6-3482-4511-8540-46aef4db130e/volumes" Nov 23 04:21:15 crc kubenswrapper[4751]: I1123 04:21:15.644665 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:21:15 crc kubenswrapper[4751]: E1123 04:21:15.646955 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.048314 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4882-account-create-qhczl"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.084428 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-cvnzp"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.093446 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-h5dhh"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.104364 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-b14b-account-create-cpb55"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.114438 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-42b4-account-create-h4mjj"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.125137 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-cvnzp"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.134803 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4882-account-create-qhczl"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.142429 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-h5dhh"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.149694 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-42b4-account-create-h4mjj"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.155860 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-cwkbl"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.161795 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b14b-account-create-cpb55"] Nov 23 04:21:17 crc kubenswrapper[4751]: I1123 04:21:17.167985 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-cwkbl"] Nov 23 04:21:18 crc kubenswrapper[4751]: I1123 04:21:18.661110 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0b797e-86af-4db9-b133-f12052f4a258" path="/var/lib/kubelet/pods/2e0b797e-86af-4db9-b133-f12052f4a258/volumes" Nov 23 04:21:18 crc kubenswrapper[4751]: I1123 04:21:18.662215 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99795e69-f0fd-4764-94d1-45148eaed6f7" path="/var/lib/kubelet/pods/99795e69-f0fd-4764-94d1-45148eaed6f7/volumes" Nov 23 04:21:18 crc kubenswrapper[4751]: I1123 04:21:18.662863 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a310f02b-1e9b-44c2-ac1e-39737a0123d7" path="/var/lib/kubelet/pods/a310f02b-1e9b-44c2-ac1e-39737a0123d7/volumes" Nov 23 04:21:18 crc kubenswrapper[4751]: I1123 04:21:18.663435 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabbcf91-7bbc-41e0-9282-d89d88fa89b7" path="/var/lib/kubelet/pods/aabbcf91-7bbc-41e0-9282-d89d88fa89b7/volumes" Nov 23 04:21:18 crc kubenswrapper[4751]: I1123 04:21:18.664555 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1953dab-c7b4-46bb-86a8-f83e7db63538" path="/var/lib/kubelet/pods/c1953dab-c7b4-46bb-86a8-f83e7db63538/volumes" Nov 23 04:21:18 crc kubenswrapper[4751]: I1123 04:21:18.665224 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5cc4054-b04a-4b60-be8b-8411ded0d63a" path="/var/lib/kubelet/pods/e5cc4054-b04a-4b60-be8b-8411ded0d63a/volumes" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.331585 4751 scope.go:117] "RemoveContainer" containerID="ee757f37e94742d8a5595f19f30415132dc013c44eb807e3e9d9f9f1abcf38dd" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.376925 4751 scope.go:117] "RemoveContainer" containerID="137bec19a10de6236ac0e4d2aaf3f6566d49f0dd64e545721aa3b4bfe6229f9e" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.418038 4751 scope.go:117] "RemoveContainer" containerID="2c8df6cc863eae0ce41d4eb9dcdf2a5cf4ab00ed8ca23385f8aa73f861310c87" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.466138 4751 scope.go:117] "RemoveContainer" 
containerID="facab3d436fcaf27193843c3c3d9dd3d81eeb6b6b11e2249833cc680306bdf79" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.508968 4751 scope.go:117] "RemoveContainer" containerID="13581f3e4c9936555d4b51bb66c25d50f604ff438025d32fbf60369ea076baf4" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.539923 4751 scope.go:117] "RemoveContainer" containerID="747a6f7ad6c49c2a9483351c62c541657f001f84148c3b8686467abef313fdcc" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.589718 4751 scope.go:117] "RemoveContainer" containerID="378a3d663fed2079099af1d536142d6a158cf16947e55feab5a0106c770fca45" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.619216 4751 scope.go:117] "RemoveContainer" containerID="1e2e3bf3d1d24c14593e0480e840d7f1c6082486211b6d6f0b66539c7f09fcd6" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.652578 4751 scope.go:117] "RemoveContainer" containerID="64a9823b055807457ce1b06e0a43e2cd6581d11f4b8c293d31d1af81a8b334a3" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.676034 4751 scope.go:117] "RemoveContainer" containerID="94ccf6054e5174c7992f340b04aae05ee6c3d883d6f1ecab4cecf4bad63d8d5c" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.700455 4751 scope.go:117] "RemoveContainer" containerID="997c76a04dce1895d439e37b162912bd81360b0769a1056028594f377ddebcc0" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.750356 4751 scope.go:117] "RemoveContainer" containerID="caed715fdb8987c75e6718f4327c56d7c6621ff0623eff8c1af1b32336fc825f" Nov 23 04:21:19 crc kubenswrapper[4751]: I1123 04:21:19.778176 4751 scope.go:117] "RemoveContainer" containerID="61198fb5ac0ba5fc7016d0803f024a6a72654918a96a2e383a831930926cd44b" Nov 23 04:21:21 crc kubenswrapper[4751]: I1123 04:21:21.037672 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2pg58"] Nov 23 04:21:21 crc kubenswrapper[4751]: I1123 04:21:21.044399 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2pg58"] Nov 23 04:21:22 crc kubenswrapper[4751]: I1123 04:21:22.657563 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30448e27-081b-401d-abe4-b454365c5831" path="/var/lib/kubelet/pods/30448e27-081b-401d-abe4-b454365c5831/volumes" Nov 23 04:21:30 crc kubenswrapper[4751]: I1123 04:21:30.644595 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:21:30 crc kubenswrapper[4751]: E1123 04:21:30.645501 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:21:33 crc kubenswrapper[4751]: I1123 04:21:33.469702 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a46c73e-f53a-4bcc-8a3a-d5982ecc6649" containerID="3f2f5252d9d1379b697978e25dba00c7998ea919a30337d4ca242613933f1251" exitCode=0 Nov 23 04:21:33 crc kubenswrapper[4751]: I1123 04:21:33.470313 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" event={"ID":"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649","Type":"ContainerDied","Data":"3f2f5252d9d1379b697978e25dba00c7998ea919a30337d4ca242613933f1251"} Nov 23 04:21:34 crc 
kubenswrapper[4751]: I1123 04:21:34.947225 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.064279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnnb2\" (UniqueName: \"kubernetes.io/projected/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-kube-api-access-mnnb2\") pod \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.064471 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-ssh-key\") pod \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.064521 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-inventory\") pod \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\" (UID: \"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649\") " Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.071258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-kube-api-access-mnnb2" (OuterVolumeSpecName: "kube-api-access-mnnb2") pod "5a46c73e-f53a-4bcc-8a3a-d5982ecc6649" (UID: "5a46c73e-f53a-4bcc-8a3a-d5982ecc6649"). InnerVolumeSpecName "kube-api-access-mnnb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.100276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-inventory" (OuterVolumeSpecName: "inventory") pod "5a46c73e-f53a-4bcc-8a3a-d5982ecc6649" (UID: "5a46c73e-f53a-4bcc-8a3a-d5982ecc6649"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.102658 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5a46c73e-f53a-4bcc-8a3a-d5982ecc6649" (UID: "5a46c73e-f53a-4bcc-8a3a-d5982ecc6649"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.166796 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnnb2\" (UniqueName: \"kubernetes.io/projected/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-kube-api-access-mnnb2\") on node \"crc\" DevicePath \"\"" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.166833 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.166852 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a46c73e-f53a-4bcc-8a3a-d5982ecc6649-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.503757 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" event={"ID":"5a46c73e-f53a-4bcc-8a3a-d5982ecc6649","Type":"ContainerDied","Data":"8c2234d8af05d559627b60ddfd5e53b50f32bf5b49c82b6bd38a348719cb18e6"} Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.503818 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c2234d8af05d559627b60ddfd5e53b50f32bf5b49c82b6bd38a348719cb18e6" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.503864 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99kcg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.625995 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg"] Nov 23 04:21:35 crc kubenswrapper[4751]: E1123 04:21:35.626748 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a46c73e-f53a-4bcc-8a3a-d5982ecc6649" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.626769 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a46c73e-f53a-4bcc-8a3a-d5982ecc6649" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 23 04:21:35 crc kubenswrapper[4751]: E1123 04:21:35.626785 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="registry-server" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.626795 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="registry-server" Nov 23 04:21:35 crc kubenswrapper[4751]: E1123 04:21:35.626833 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="extract-content" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.626842 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="extract-content" Nov 23 04:21:35 crc kubenswrapper[4751]: E1123 04:21:35.626868 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="extract-utilities" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.626877 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="extract-utilities" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.627108 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5a46c73e-f53a-4bcc-8a3a-d5982ecc6649" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.627127 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b050e0-d83c-496d-8156-dbfa8c2b130f" containerName="registry-server" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.627892 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.631794 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.631842 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.632065 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.633852 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.633988 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg"] Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.779891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.780978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcw2\" (UniqueName: \"kubernetes.io/projected/d8c7c9fe-7d35-413c-8738-31ec126e8d80-kube-api-access-qxcw2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.781269 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.883437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxcw2\" (UniqueName: \"kubernetes.io/projected/d8c7c9fe-7d35-413c-8738-31ec126e8d80-kube-api-access-qxcw2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.883588 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.883688 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.893225 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.895894 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.903058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxcw2\" (UniqueName: \"kubernetes.io/projected/d8c7c9fe-7d35-413c-8738-31ec126e8d80-kube-api-access-qxcw2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-9xskg\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:35 crc kubenswrapper[4751]: I1123 04:21:35.950472 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" Nov 23 04:21:36 crc kubenswrapper[4751]: I1123 04:21:36.416472 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg"] Nov 23 04:21:36 crc kubenswrapper[4751]: I1123 04:21:36.421576 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:21:36 crc kubenswrapper[4751]: I1123 04:21:36.512611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" event={"ID":"d8c7c9fe-7d35-413c-8738-31ec126e8d80","Type":"ContainerStarted","Data":"f1ac3178e69fe340685441d6d380548fc22607a8e77261eaa2fa5f53482354d5"} Nov 23 04:21:37 crc kubenswrapper[4751]: I1123 04:21:37.532067 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" event={"ID":"d8c7c9fe-7d35-413c-8738-31ec126e8d80","Type":"ContainerStarted","Data":"777704724ddc89496f7fc7ae86305872f3829a5269cd1cd3bc51ba579e637071"} Nov 23 04:21:37 crc kubenswrapper[4751]: I1123 04:21:37.567233 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" podStartSLOduration=2.058175223 podStartE2EDuration="2.567200954s" podCreationTimestamp="2025-11-23 04:21:35 +0000 UTC" firstStartedPulling="2025-11-23 04:21:36.421238275 +0000 UTC m=+1592.614909634" lastFinishedPulling="2025-11-23 04:21:36.930263966 +0000 UTC m=+1593.123935365" observedRunningTime="2025-11-23 04:21:37.557467059 +0000 UTC m=+1593.751138478" watchObservedRunningTime="2025-11-23 04:21:37.567200954 +0000 UTC m=+1593.760872353" Nov 23 04:21:41 crc kubenswrapper[4751]: I1123 04:21:41.645167 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:21:41 crc kubenswrapper[4751]: E1123 04:21:41.646379 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:21:53 crc kubenswrapper[4751]: I1123 04:21:53.644862 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:21:53 crc kubenswrapper[4751]: E1123 04:21:53.645522 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:22:02 crc kubenswrapper[4751]: I1123 04:22:02.051587 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vzfxr"] Nov 23 04:22:02 crc kubenswrapper[4751]: I1123 04:22:02.058560 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vzfxr"] Nov 23 04:22:02 crc kubenswrapper[4751]: I1123 04:22:02.662797 4751 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402436df-c2b0-435a-8fed-4d88a3af1e40" path="/var/lib/kubelet/pods/402436df-c2b0-435a-8fed-4d88a3af1e40/volumes" Nov 23 04:22:03 crc kubenswrapper[4751]: I1123 04:22:03.064543 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jkfd7"] Nov 23 04:22:03 crc kubenswrapper[4751]: I1123 04:22:03.074436 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jkfd7"] Nov 23 04:22:04 crc kubenswrapper[4751]: I1123 04:22:04.025673 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hhmh2"] Nov 23 04:22:04 crc kubenswrapper[4751]: I1123 04:22:04.031932 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hhmh2"] Nov 23 04:22:04 crc kubenswrapper[4751]: I1123 04:22:04.659299 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4181f6c-4f0a-41fb-af82-f7f10f85c117" path="/var/lib/kubelet/pods/d4181f6c-4f0a-41fb-af82-f7f10f85c117/volumes" Nov 23 04:22:04 crc kubenswrapper[4751]: I1123 04:22:04.660941 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0ce204-8f54-4c8c-98a6-f18f36873339" path="/var/lib/kubelet/pods/ee0ce204-8f54-4c8c-98a6-f18f36873339/volumes" Nov 23 04:22:05 crc kubenswrapper[4751]: I1123 04:22:05.040243 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-q65sd"] Nov 23 04:22:05 crc kubenswrapper[4751]: I1123 04:22:05.054257 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-q65sd"] Nov 23 04:22:06 crc kubenswrapper[4751]: I1123 04:22:06.665392 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffd47c6-ed23-4bc3-be63-dd817807dc3e" path="/var/lib/kubelet/pods/4ffd47c6-ed23-4bc3-be63-dd817807dc3e/volumes" Nov 23 04:22:08 crc kubenswrapper[4751]: I1123 04:22:08.647871 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:22:08 crc kubenswrapper[4751]: E1123 04:22:08.649072 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:22:20 crc kubenswrapper[4751]: I1123 04:22:20.035664 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-slzk9"] Nov 23 04:22:20 crc kubenswrapper[4751]: I1123 04:22:20.045378 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-slzk9"] Nov 23 04:22:20 crc kubenswrapper[4751]: I1123 04:22:20.147499 4751 scope.go:117] "RemoveContainer" containerID="52fb127f56ef4f0527086e4443a35cad61449f1ed141487b4b474cee1baaeb50" Nov 23 04:22:20 crc kubenswrapper[4751]: I1123 04:22:20.218126 4751 scope.go:117] "RemoveContainer" containerID="ea8e5ba87f241184fa7a06b0e02fc44c9e26afc85a9c23c55a4b55105a6208fa" Nov 23 04:22:20 crc kubenswrapper[4751]: I1123 04:22:20.276526 4751 scope.go:117] "RemoveContainer" containerID="2259914a36e96ca06baa494295879beb76786d10ab91f69b144288b95aac8399" Nov 23 04:22:20 crc kubenswrapper[4751]: I1123 04:22:20.323604 4751 scope.go:117] "RemoveContainer" 
containerID="965f7e2ca2d2e35b337bbf8191496d92baeed52f1c850233659f271ae60e5e82" Nov 23 04:22:20 crc kubenswrapper[4751]: I1123 04:22:20.363597 4751 scope.go:117] "RemoveContainer" containerID="5a1fe356fe3d28a8621c54b1ab73ba25e8b4f353169c54b74fee6f922bd97365" Nov 23 04:22:20 crc kubenswrapper[4751]: I1123 04:22:20.666799 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ead28c-46bc-4415-a35c-1d3d8de722dd" path="/var/lib/kubelet/pods/37ead28c-46bc-4415-a35c-1d3d8de722dd/volumes" Nov 23 04:22:22 crc kubenswrapper[4751]: I1123 04:22:22.644940 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:22:22 crc kubenswrapper[4751]: E1123 04:22:22.645508 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:22:34 crc kubenswrapper[4751]: I1123 04:22:34.654492 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:22:34 crc kubenswrapper[4751]: E1123 04:22:34.655340 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:22:44 crc kubenswrapper[4751]: I1123 04:22:44.051104 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-gr4mz"] Nov 23 04:22:44 crc kubenswrapper[4751]: I1123 04:22:44.064463 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-gr4mz"] Nov 23 04:22:44 crc kubenswrapper[4751]: I1123 04:22:44.661623 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d838efd-6302-4350-a91d-a04a9c37f699" path="/var/lib/kubelet/pods/5d838efd-6302-4350-a91d-a04a9c37f699/volumes" Nov 23 04:22:45 crc kubenswrapper[4751]: I1123 04:22:45.038217 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-s24br"] Nov 23 04:22:45 crc kubenswrapper[4751]: I1123 04:22:45.052008 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-grvjd"] Nov 23 04:22:45 crc kubenswrapper[4751]: I1123 04:22:45.060456 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7892-account-create-5nvvr"] Nov 23 04:22:45 crc kubenswrapper[4751]: I1123 04:22:45.068025 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-s24br"] Nov 23 04:22:45 crc kubenswrapper[4751]: I1123 04:22:45.075971 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-grvjd"] Nov 23 04:22:45 crc kubenswrapper[4751]: I1123 04:22:45.083683 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7892-account-create-5nvvr"] Nov 23 04:22:46 crc kubenswrapper[4751]: I1123 04:22:46.658025 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="86e06221-29e8-46f0-9cb2-5c1dfcd166cd" path="/var/lib/kubelet/pods/86e06221-29e8-46f0-9cb2-5c1dfcd166cd/volumes"
Nov 23 04:22:46 crc kubenswrapper[4751]: I1123 04:22:46.658962 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e20f20-d357-4381-b150-bd0ca23cee86" path="/var/lib/kubelet/pods/c9e20f20-d357-4381-b150-bd0ca23cee86/volumes"
Nov 23 04:22:46 crc kubenswrapper[4751]: I1123 04:22:46.659713 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd9da06-fe98-41d1-8e37-8207665dad25" path="/var/lib/kubelet/pods/dcd9da06-fe98-41d1-8e37-8207665dad25/volumes"
Nov 23 04:22:48 crc kubenswrapper[4751]: I1123 04:22:48.645443 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:22:48 crc kubenswrapper[4751]: E1123 04:22:48.645999 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:22:51 crc kubenswrapper[4751]: I1123 04:22:51.030586 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1594-account-create-98rkq"]
Nov 23 04:22:51 crc kubenswrapper[4751]: I1123 04:22:51.038071 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-49c1-account-create-7k8mk"]
Nov 23 04:22:51 crc kubenswrapper[4751]: I1123 04:22:51.047060 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-49c1-account-create-7k8mk"]
Nov 23 04:22:51 crc kubenswrapper[4751]: I1123 04:22:51.055281 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1594-account-create-98rkq"]
Nov 23 04:22:52 crc kubenswrapper[4751]: I1123 04:22:52.655262 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b32816c-6aea-448d-bc18-38a3206ae2d3" path="/var/lib/kubelet/pods/1b32816c-6aea-448d-bc18-38a3206ae2d3/volumes"
Nov 23 04:22:52 crc kubenswrapper[4751]: I1123 04:22:52.656218 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4e0e01-0fde-4d72-8693-f3ac9edde707" path="/var/lib/kubelet/pods/5d4e0e01-0fde-4d72-8693-f3ac9edde707/volumes"
Nov 23 04:22:54 crc kubenswrapper[4751]: I1123 04:22:54.286793 4751 generic.go:334] "Generic (PLEG): container finished" podID="d8c7c9fe-7d35-413c-8738-31ec126e8d80" containerID="777704724ddc89496f7fc7ae86305872f3829a5269cd1cd3bc51ba579e637071" exitCode=0
Nov 23 04:22:54 crc kubenswrapper[4751]: I1123 04:22:54.286929 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" event={"ID":"d8c7c9fe-7d35-413c-8738-31ec126e8d80","Type":"ContainerDied","Data":"777704724ddc89496f7fc7ae86305872f3829a5269cd1cd3bc51ba579e637071"}
Nov 23 04:22:55 crc kubenswrapper[4751]: I1123 04:22:55.783629 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg"
Nov 23 04:22:55 crc kubenswrapper[4751]: I1123 04:22:55.900506 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxcw2\" (UniqueName: \"kubernetes.io/projected/d8c7c9fe-7d35-413c-8738-31ec126e8d80-kube-api-access-qxcw2\") pod \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") "
Nov 23 04:22:55 crc kubenswrapper[4751]: I1123 04:22:55.900612 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-inventory\") pod \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") "
Nov 23 04:22:55 crc kubenswrapper[4751]: I1123 04:22:55.900660 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-ssh-key\") pod \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\" (UID: \"d8c7c9fe-7d35-413c-8738-31ec126e8d80\") "
Nov 23 04:22:55 crc kubenswrapper[4751]: I1123 04:22:55.911699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c7c9fe-7d35-413c-8738-31ec126e8d80-kube-api-access-qxcw2" (OuterVolumeSpecName: "kube-api-access-qxcw2") pod "d8c7c9fe-7d35-413c-8738-31ec126e8d80" (UID: "d8c7c9fe-7d35-413c-8738-31ec126e8d80"). InnerVolumeSpecName "kube-api-access-qxcw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:22:55 crc kubenswrapper[4751]: I1123 04:22:55.945733 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-inventory" (OuterVolumeSpecName: "inventory") pod "d8c7c9fe-7d35-413c-8738-31ec126e8d80" (UID: "d8c7c9fe-7d35-413c-8738-31ec126e8d80"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:22:55 crc kubenswrapper[4751]: I1123 04:22:55.971970 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d8c7c9fe-7d35-413c-8738-31ec126e8d80" (UID: "d8c7c9fe-7d35-413c-8738-31ec126e8d80"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.002896 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.002937 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxcw2\" (UniqueName: \"kubernetes.io/projected/d8c7c9fe-7d35-413c-8738-31ec126e8d80-kube-api-access-qxcw2\") on node \"crc\" DevicePath \"\""
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.002954 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8c7c9fe-7d35-413c-8738-31ec126e8d80-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.317127 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg" event={"ID":"d8c7c9fe-7d35-413c-8738-31ec126e8d80","Type":"ContainerDied","Data":"f1ac3178e69fe340685441d6d380548fc22607a8e77261eaa2fa5f53482354d5"}
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.317202 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ac3178e69fe340685441d6d380548fc22607a8e77261eaa2fa5f53482354d5"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.317222 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-9xskg"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.429279 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"]
Nov 23 04:22:56 crc kubenswrapper[4751]: E1123 04:22:56.430077 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c7c9fe-7d35-413c-8738-31ec126e8d80" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.430193 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c7c9fe-7d35-413c-8738-31ec126e8d80" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.430621 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c7c9fe-7d35-413c-8738-31ec126e8d80" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.431683 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.436157 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.436312 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.436469 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.436598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.440043 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"]
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.619617 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78zmr\" (UniqueName: \"kubernetes.io/projected/8e5d5738-2df6-456a-b038-9605e0da3b66-kube-api-access-78zmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.619688 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.619854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.721302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78zmr\" (UniqueName: \"kubernetes.io/projected/8e5d5738-2df6-456a-b038-9605e0da3b66-kube-api-access-78zmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.721455 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.721594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.726966 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.726966 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.754528 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78zmr\" (UniqueName: \"kubernetes.io/projected/8e5d5738-2df6-456a-b038-9605e0da3b66-kube-api-access-78zmr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:56 crc kubenswrapper[4751]: I1123 04:22:56.772058 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:22:57 crc kubenswrapper[4751]: I1123 04:22:57.330689 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"]
Nov 23 04:22:58 crc kubenswrapper[4751]: I1123 04:22:58.341995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v" event={"ID":"8e5d5738-2df6-456a-b038-9605e0da3b66","Type":"ContainerStarted","Data":"e6cca530a4ffb32f51340b8b0474a8c9adce3dab64826ca3d779e415b043f824"}
Nov 23 04:22:58 crc kubenswrapper[4751]: I1123 04:22:58.344126 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v" event={"ID":"8e5d5738-2df6-456a-b038-9605e0da3b66","Type":"ContainerStarted","Data":"9c70c87e11e545da3a00009e3cd8b69526b4e00ca031ec8fac3838aee12b0a3d"}
Nov 23 04:22:58 crc kubenswrapper[4751]: I1123 04:22:58.379429 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v" podStartSLOduration=1.874899923 podStartE2EDuration="2.379398511s" podCreationTimestamp="2025-11-23 04:22:56 +0000 UTC" firstStartedPulling="2025-11-23 04:22:57.331240378 +0000 UTC m=+1673.524911737" lastFinishedPulling="2025-11-23 04:22:57.835738946 +0000 UTC m=+1674.029410325" observedRunningTime="2025-11-23 04:22:58.359953571 +0000 UTC m=+1674.553624970" watchObservedRunningTime="2025-11-23 04:22:58.379398511 +0000 UTC m=+1674.573069920"
Nov 23 04:23:01 crc kubenswrapper[4751]: I1123 04:23:01.644845 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:23:01 crc kubenswrapper[4751]: E1123 04:23:01.645293 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:23:03 crc kubenswrapper[4751]: I1123 04:23:03.398786 4751 generic.go:334] "Generic (PLEG): container finished" podID="8e5d5738-2df6-456a-b038-9605e0da3b66" containerID="e6cca530a4ffb32f51340b8b0474a8c9adce3dab64826ca3d779e415b043f824" exitCode=0
Nov 23 04:23:03 crc kubenswrapper[4751]: I1123 04:23:03.398841 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v" event={"ID":"8e5d5738-2df6-456a-b038-9605e0da3b66","Type":"ContainerDied","Data":"e6cca530a4ffb32f51340b8b0474a8c9adce3dab64826ca3d779e415b043f824"}
Nov 23 04:23:04 crc kubenswrapper[4751]: I1123 04:23:04.925343 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.102184 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-inventory\") pod \"8e5d5738-2df6-456a-b038-9605e0da3b66\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") "
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.102769 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78zmr\" (UniqueName: \"kubernetes.io/projected/8e5d5738-2df6-456a-b038-9605e0da3b66-kube-api-access-78zmr\") pod \"8e5d5738-2df6-456a-b038-9605e0da3b66\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") "
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.102829 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-ssh-key\") pod \"8e5d5738-2df6-456a-b038-9605e0da3b66\" (UID: \"8e5d5738-2df6-456a-b038-9605e0da3b66\") "
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.110289 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5d5738-2df6-456a-b038-9605e0da3b66-kube-api-access-78zmr" (OuterVolumeSpecName: "kube-api-access-78zmr") pod "8e5d5738-2df6-456a-b038-9605e0da3b66" (UID: "8e5d5738-2df6-456a-b038-9605e0da3b66"). InnerVolumeSpecName "kube-api-access-78zmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.149373 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8e5d5738-2df6-456a-b038-9605e0da3b66" (UID: "8e5d5738-2df6-456a-b038-9605e0da3b66"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.153406 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-inventory" (OuterVolumeSpecName: "inventory") pod "8e5d5738-2df6-456a-b038-9605e0da3b66" (UID: "8e5d5738-2df6-456a-b038-9605e0da3b66"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.205908 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.205987 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8e5d5738-2df6-456a-b038-9605e0da3b66-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.206008 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78zmr\" (UniqueName: \"kubernetes.io/projected/8e5d5738-2df6-456a-b038-9605e0da3b66-kube-api-access-78zmr\") on node \"crc\" DevicePath \"\""
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.421737 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.422012 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v" event={"ID":"8e5d5738-2df6-456a-b038-9605e0da3b66","Type":"ContainerDied","Data":"9c70c87e11e545da3a00009e3cd8b69526b4e00ca031ec8fac3838aee12b0a3d"}
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.422051 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c70c87e11e545da3a00009e3cd8b69526b4e00ca031ec8fac3838aee12b0a3d"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.506960 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"]
Nov 23 04:23:05 crc kubenswrapper[4751]: E1123 04:23:05.507509 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5d5738-2df6-456a-b038-9605e0da3b66" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.507531 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5d5738-2df6-456a-b038-9605e0da3b66" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.507753 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e5d5738-2df6-456a-b038-9605e0da3b66" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.508642 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.512290 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.512890 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.513247 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.513583 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.514146 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.514232 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.514494 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nd2b\" (UniqueName: \"kubernetes.io/projected/42446795-8c4f-4b34-b87c-63fc5306226e-kube-api-access-5nd2b\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.523651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"]
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.616809 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nd2b\" (UniqueName: \"kubernetes.io/projected/42446795-8c4f-4b34-b87c-63fc5306226e-kube-api-access-5nd2b\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.617128 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.617193 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.623196 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.634574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.636321 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nd2b\" (UniqueName: \"kubernetes.io/projected/42446795-8c4f-4b34-b87c-63fc5306226e-kube-api-access-5nd2b\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dm6wp\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:05 crc kubenswrapper[4751]: I1123 04:23:05.835528 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:06 crc kubenswrapper[4751]: I1123 04:23:06.466894 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"]
Nov 23 04:23:07 crc kubenswrapper[4751]: I1123 04:23:07.441944 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp" event={"ID":"42446795-8c4f-4b34-b87c-63fc5306226e","Type":"ContainerStarted","Data":"34eb8c1b4e5f59584c054313ad3eefa2fd246fd3d645da8b7b0a60f406a6cced"}
Nov 23 04:23:07 crc kubenswrapper[4751]: I1123 04:23:07.442254 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp" event={"ID":"42446795-8c4f-4b34-b87c-63fc5306226e","Type":"ContainerStarted","Data":"d6678ba265c0d03df9c82922549fe54458e9686c036ba2a6434d85821b308dee"}
Nov 23 04:23:07 crc kubenswrapper[4751]: I1123 04:23:07.465904 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp" podStartSLOduration=1.9082600730000001 podStartE2EDuration="2.465879574s" podCreationTimestamp="2025-11-23 04:23:05 +0000 UTC" firstStartedPulling="2025-11-23 04:23:06.468569904 +0000 UTC m=+1682.662241263" lastFinishedPulling="2025-11-23 04:23:07.026189375 +0000 UTC m=+1683.219860764" observedRunningTime="2025-11-23 04:23:07.455866171 +0000 UTC m=+1683.649537530" watchObservedRunningTime="2025-11-23 04:23:07.465879574 +0000 UTC m=+1683.659550963"
Nov 23 04:23:13 crc kubenswrapper[4751]: I1123 04:23:13.645021 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:23:13 crc kubenswrapper[4751]: E1123 04:23:13.645853 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:23:20 crc kubenswrapper[4751]: I1123 04:23:20.522822 4751 scope.go:117] "RemoveContainer" containerID="b9b02b6d677c06af8c748630796908ef7a4c3dcdb806df3c262da5dbeddbb6ea"
Nov 23 04:23:20 crc kubenswrapper[4751]: I1123 04:23:20.555724 4751 scope.go:117] "RemoveContainer" containerID="2e6b30985f56bd4b211e2b2cb029fd41c57945cddd96e43ddc655e22bb7a9e68"
Nov 23 04:23:20 crc kubenswrapper[4751]: I1123 04:23:20.613065 4751 scope.go:117] "RemoveContainer" containerID="ecbd06a2d2acab5fd51b7c394f05aff59c3893043fe24734404a3e72e6ef98ba"
Nov 23 04:23:20 crc kubenswrapper[4751]: I1123 04:23:20.664861 4751 scope.go:117] "RemoveContainer" containerID="3854bd35c015d29f7d785f3af44bcea5fe2d4e756be8f7308bb8cd96a5e2c9e5"
Nov 23 04:23:20 crc kubenswrapper[4751]: I1123 04:23:20.730823 4751 scope.go:117] "RemoveContainer" containerID="7b7204f054ca8264d205087aef66a1eb3b534994228996dd4a8f8d5bcfdb8d64"
Nov 23 04:23:20 crc kubenswrapper[4751]: I1123 04:23:20.758779 4751 scope.go:117] "RemoveContainer" containerID="57f3ee99f361b849510efc691b8eb2ffffd155cdc5b5457dc8c3c0d9d5761a8b"
Nov 23 04:23:20 crc kubenswrapper[4751]: I1123 04:23:20.799196 4751 scope.go:117] "RemoveContainer" containerID="90b46d60984a3cb178bc04fc5c8d7866ccea8b20f7ae27a3ad17072d64e1bad4"
Nov 23 04:23:21 crc kubenswrapper[4751]: I1123 04:23:21.043320 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2l582"]
Nov 23 04:23:21 crc kubenswrapper[4751]: I1123 04:23:21.051991 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2l582"]
Nov 23 04:23:22 crc kubenswrapper[4751]: I1123 04:23:22.656829 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94895060-a23e-4768-b800-3ca2557264fd" path="/var/lib/kubelet/pods/94895060-a23e-4768-b800-3ca2557264fd/volumes"
Nov 23 04:23:25 crc kubenswrapper[4751]: I1123 04:23:25.647882 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:23:25 crc kubenswrapper[4751]: E1123 04:23:25.650270 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:23:40 crc kubenswrapper[4751]: I1123 04:23:40.644586 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:23:40 crc kubenswrapper[4751]: E1123 04:23:40.645438 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:23:43 crc kubenswrapper[4751]: I1123 04:23:43.049956 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2c7qw"]
Nov 23 04:23:43 crc kubenswrapper[4751]: I1123 04:23:43.060724 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2c7qw"]
Nov 23 04:23:44 crc kubenswrapper[4751]: I1123 04:23:44.660205 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a210f151-b9cb-46c5-8493-0e6b9629b117" path="/var/lib/kubelet/pods/a210f151-b9cb-46c5-8493-0e6b9629b117/volumes"
Nov 23 04:23:46 crc kubenswrapper[4751]: I1123 04:23:46.037486 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8gf78"]
Nov 23 04:23:46 crc kubenswrapper[4751]: I1123 04:23:46.053488 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-8gf78"]
Nov 23 04:23:46 crc kubenswrapper[4751]: I1123 04:23:46.655471 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f577aed3-fab8-4a7e-9beb-4cb6d472530b" path="/var/lib/kubelet/pods/f577aed3-fab8-4a7e-9beb-4cb6d472530b/volumes"
Nov 23 04:23:49 crc kubenswrapper[4751]: I1123 04:23:49.916710 4751 generic.go:334] "Generic (PLEG): container finished" podID="42446795-8c4f-4b34-b87c-63fc5306226e" containerID="34eb8c1b4e5f59584c054313ad3eefa2fd246fd3d645da8b7b0a60f406a6cced" exitCode=0
Nov 23 04:23:49 crc kubenswrapper[4751]: I1123 04:23:49.916839 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp" event={"ID":"42446795-8c4f-4b34-b87c-63fc5306226e","Type":"ContainerDied","Data":"34eb8c1b4e5f59584c054313ad3eefa2fd246fd3d645da8b7b0a60f406a6cced"}
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.407954 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.555049 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nd2b\" (UniqueName: \"kubernetes.io/projected/42446795-8c4f-4b34-b87c-63fc5306226e-kube-api-access-5nd2b\") pod \"42446795-8c4f-4b34-b87c-63fc5306226e\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") "
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.555117 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-inventory\") pod \"42446795-8c4f-4b34-b87c-63fc5306226e\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") "
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.555200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-ssh-key\") pod \"42446795-8c4f-4b34-b87c-63fc5306226e\" (UID: \"42446795-8c4f-4b34-b87c-63fc5306226e\") "
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.563836 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42446795-8c4f-4b34-b87c-63fc5306226e-kube-api-access-5nd2b" (OuterVolumeSpecName: "kube-api-access-5nd2b") pod "42446795-8c4f-4b34-b87c-63fc5306226e" (UID: "42446795-8c4f-4b34-b87c-63fc5306226e"). InnerVolumeSpecName "kube-api-access-5nd2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.588952 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-inventory" (OuterVolumeSpecName: "inventory") pod "42446795-8c4f-4b34-b87c-63fc5306226e" (UID: "42446795-8c4f-4b34-b87c-63fc5306226e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.595550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "42446795-8c4f-4b34-b87c-63fc5306226e" (UID: "42446795-8c4f-4b34-b87c-63fc5306226e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.657572 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nd2b\" (UniqueName: \"kubernetes.io/projected/42446795-8c4f-4b34-b87c-63fc5306226e-kube-api-access-5nd2b\") on node \"crc\" DevicePath \"\""
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.657602 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.657611 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/42446795-8c4f-4b34-b87c-63fc5306226e-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.944089 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp" event={"ID":"42446795-8c4f-4b34-b87c-63fc5306226e","Type":"ContainerDied","Data":"d6678ba265c0d03df9c82922549fe54458e9686c036ba2a6434d85821b308dee"}
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.944171 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6678ba265c0d03df9c82922549fe54458e9686c036ba2a6434d85821b308dee"
Nov 23 04:23:51 crc kubenswrapper[4751]: I1123 04:23:51.944279 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dm6wp"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.043339 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"]
Nov 23 04:23:52 crc kubenswrapper[4751]: E1123 04:23:52.043897 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42446795-8c4f-4b34-b87c-63fc5306226e" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.043918 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="42446795-8c4f-4b34-b87c-63fc5306226e" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.044226 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="42446795-8c4f-4b34-b87c-63fc5306226e" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.045449 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.048519 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.049164 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.049403 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.049756 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.081095 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"]
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.168266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.168338 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.168420 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx2ch\" (UniqueName: \"kubernetes.io/projected/ddafc7c0-5c18-49f0-b609-f68959f5bc29-kube-api-access-mx2ch\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.270108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.270160 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.270214 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx2ch\" (UniqueName: \"kubernetes.io/projected/ddafc7c0-5c18-49f0-b609-f68959f5bc29-kube-api-access-mx2ch\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.280082 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.280538 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.288763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx2ch\" (UniqueName: \"kubernetes.io/projected/ddafc7c0-5c18-49f0-b609-f68959f5bc29-kube-api-access-mx2ch\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.373763 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:23:52 crc kubenswrapper[4751]: I1123 04:23:52.982483 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"]
Nov 23 04:23:53 crc kubenswrapper[4751]: I1123 04:23:53.974933 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x" event={"ID":"ddafc7c0-5c18-49f0-b609-f68959f5bc29","Type":"ContainerStarted","Data":"e5f14c4b24472b82cd1df56e7356a410f3da64adf0620224d5771444a7a4a9b1"}
Nov 23 04:23:53 crc kubenswrapper[4751]: I1123 04:23:53.975313 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x" event={"ID":"ddafc7c0-5c18-49f0-b609-f68959f5bc29","Type":"ContainerStarted","Data":"d5b39ed0db897b2be765d35ecf3bfd3af619011dffdee83a255f0f4c992e2e3a"}
Nov 23 04:23:54 crc kubenswrapper[4751]: I1123 04:23:54.005123 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x" podStartSLOduration=1.608081886 podStartE2EDuration="2.005103086s" podCreationTimestamp="2025-11-23 04:23:52 +0000 UTC" firstStartedPulling="2025-11-23 04:23:52.985376738 +0000 UTC m=+1729.179048097" lastFinishedPulling="2025-11-23 04:23:53.382397908 +0000 UTC m=+1729.576069297" observedRunningTime="2025-11-23 04:23:53.998480972 +0000 UTC m=+1730.192152401" watchObservedRunningTime="2025-11-23 04:23:54.005103086 +0000 UTC m=+1730.198774455"
Nov 23 04:23:55 crc kubenswrapper[4751]: I1123 04:23:55.645261 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:23:55 crc kubenswrapper[4751]: E1123 04:23:55.645980 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:24:07 crc kubenswrapper[4751]: I1123 04:24:07.644423 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:24:07 crc kubenswrapper[4751]: E1123 04:24:07.645259 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:24:19 crc kubenswrapper[4751]: I1123 04:24:19.644952 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:24:19 crc kubenswrapper[4751]: E1123 04:24:19.645695 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:24:20 crc kubenswrapper[4751]: I1123 04:24:20.976125 4751 scope.go:117] "RemoveContainer" containerID="f9f18f775011501d6074511c7fcad3e5203a2418ea916e77f1cdcaf2e00a8fea"
Nov 23 04:24:21 crc kubenswrapper[4751]: I1123 04:24:21.037713 4751 scope.go:117] "RemoveContainer" containerID="b732c3b3b170f5158fc963a3d9b0a0fcbe44c10c1d2d7a2497b4e3208d74b91f"
Nov 23 04:24:21 crc kubenswrapper[4751]: I1123 04:24:21.078959 4751 scope.go:117] "RemoveContainer" containerID="1cf5c8a9a3b6863f35db543172a8eb9110d1dc866fdb98158681e857f373a699"
Nov 23 04:24:29 crc kubenswrapper[4751]: I1123 04:24:29.052636 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wvmrc"]
Nov 23 04:24:29 crc kubenswrapper[4751]: I1123 04:24:29.065279 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wvmrc"]
Nov 23 04:24:30 crc kubenswrapper[4751]: I1123 04:24:30.644550 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:24:30 crc kubenswrapper[4751]: E1123 04:24:30.645206 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:24:30 crc kubenswrapper[4751]: I1123 04:24:30.663063 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe51cb37-a922-44f6-a214-22c763cee34c" path="/var/lib/kubelet/pods/fe51cb37-a922-44f6-a214-22c763cee34c/volumes"
Nov 23 04:24:45 crc kubenswrapper[4751]: I1123 04:24:45.644682 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:24:45 crc kubenswrapper[4751]: E1123 04:24:45.645634 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:24:51 crc kubenswrapper[4751]: I1123 04:24:51.610378 4751 generic.go:334] "Generic (PLEG): container finished" podID="ddafc7c0-5c18-49f0-b609-f68959f5bc29" containerID="e5f14c4b24472b82cd1df56e7356a410f3da64adf0620224d5771444a7a4a9b1" exitCode=0
Nov 23 04:24:51 crc kubenswrapper[4751]: I1123 04:24:51.610466 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x" event={"ID":"ddafc7c0-5c18-49f0-b609-f68959f5bc29","Type":"ContainerDied","Data":"e5f14c4b24472b82cd1df56e7356a410f3da64adf0620224d5771444a7a4a9b1"}
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.132907 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.232307 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-inventory\") pod \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") "
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.232473 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-ssh-key\") pod \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") "
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.232649 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx2ch\" (UniqueName: \"kubernetes.io/projected/ddafc7c0-5c18-49f0-b609-f68959f5bc29-kube-api-access-mx2ch\") pod \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\" (UID: \"ddafc7c0-5c18-49f0-b609-f68959f5bc29\") "
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.243594 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddafc7c0-5c18-49f0-b609-f68959f5bc29-kube-api-access-mx2ch" (OuterVolumeSpecName: "kube-api-access-mx2ch") pod "ddafc7c0-5c18-49f0-b609-f68959f5bc29" (UID: "ddafc7c0-5c18-49f0-b609-f68959f5bc29"). InnerVolumeSpecName "kube-api-access-mx2ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.258204 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ddafc7c0-5c18-49f0-b609-f68959f5bc29" (UID: "ddafc7c0-5c18-49f0-b609-f68959f5bc29"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.259582 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-inventory" (OuterVolumeSpecName: "inventory") pod "ddafc7c0-5c18-49f0-b609-f68959f5bc29" (UID: "ddafc7c0-5c18-49f0-b609-f68959f5bc29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.335331 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.335364 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ddafc7c0-5c18-49f0-b609-f68959f5bc29-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.335375 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx2ch\" (UniqueName: \"kubernetes.io/projected/ddafc7c0-5c18-49f0-b609-f68959f5bc29-kube-api-access-mx2ch\") on node \"crc\" DevicePath \"\""
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.634187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x" event={"ID":"ddafc7c0-5c18-49f0-b609-f68959f5bc29","Type":"ContainerDied","Data":"d5b39ed0db897b2be765d35ecf3bfd3af619011dffdee83a255f0f4c992e2e3a"}
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.634233 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b39ed0db897b2be765d35ecf3bfd3af619011dffdee83a255f0f4c992e2e3a"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.634296 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.705462 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9cnpr"]
Nov 23 04:24:53 crc kubenswrapper[4751]: E1123 04:24:53.705855 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddafc7c0-5c18-49f0-b609-f68959f5bc29" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.705876 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddafc7c0-5c18-49f0-b609-f68959f5bc29" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.706138 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddafc7c0-5c18-49f0-b609-f68959f5bc29" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.706887 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.709511 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.709578 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.710034 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.711648 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.716207 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9cnpr"]
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.844201 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.844300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.844415 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqvhp\" (UniqueName: \"kubernetes.io/projected/53cbfe3d-8559-41aa-8352-5480c56e3624-kube-api-access-fqvhp\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.946254 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.946482 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.946701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqvhp\" (UniqueName: \"kubernetes.io/projected/53cbfe3d-8559-41aa-8352-5480c56e3624-kube-api-access-fqvhp\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.953325 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.954079 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:53 crc kubenswrapper[4751]: I1123 04:24:53.978605 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqvhp\" (UniqueName: \"kubernetes.io/projected/53cbfe3d-8559-41aa-8352-5480c56e3624-kube-api-access-fqvhp\") pod \"ssh-known-hosts-edpm-deployment-9cnpr\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") " pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:54 crc kubenswrapper[4751]: I1123 04:24:54.025119 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:24:54 crc kubenswrapper[4751]: I1123 04:24:54.621192 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9cnpr"]
Nov 23 04:24:54 crc kubenswrapper[4751]: I1123 04:24:54.643252 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr" event={"ID":"53cbfe3d-8559-41aa-8352-5480c56e3624","Type":"ContainerStarted","Data":"e8feecf8a5987e3375a97fb542903a4b08b716c8b95691123c43640fbcb8a9fe"}
Nov 23 04:24:55 crc kubenswrapper[4751]: I1123 04:24:55.672053 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr" event={"ID":"53cbfe3d-8559-41aa-8352-5480c56e3624","Type":"ContainerStarted","Data":"47878c33dbbcbe6541e33bfb6c13def450742455e40f462de4528e8e383b3455"}
Nov 23 04:24:55 crc kubenswrapper[4751]: I1123 04:24:55.699425 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr" podStartSLOduration=2.276144113 podStartE2EDuration="2.699403561s" podCreationTimestamp="2025-11-23 04:24:53 +0000 UTC" firstStartedPulling="2025-11-23 04:24:54.625988927 +0000 UTC m=+1790.819660286" lastFinishedPulling="2025-11-23 04:24:55.049248335 +0000 UTC m=+1791.242919734" observedRunningTime="2025-11-23 04:24:55.692364377 +0000 UTC m=+1791.886035726" watchObservedRunningTime="2025-11-23 04:24:55.699403561 +0000 UTC m=+1791.893074950"
Nov 23 04:24:59 crc kubenswrapper[4751]: I1123 04:24:59.645145 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394"
Nov 23 04:24:59 crc kubenswrapper[4751]: E1123 04:24:59.646015 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:25:03 crc kubenswrapper[4751]: I1123 04:25:03.762508 4751 generic.go:334] "Generic (PLEG): container finished" podID="53cbfe3d-8559-41aa-8352-5480c56e3624" containerID="47878c33dbbcbe6541e33bfb6c13def450742455e40f462de4528e8e383b3455" exitCode=0
Nov 23 04:25:03 crc kubenswrapper[4751]: I1123 04:25:03.762557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr" event={"ID":"53cbfe3d-8559-41aa-8352-5480c56e3624","Type":"ContainerDied","Data":"47878c33dbbcbe6541e33bfb6c13def450742455e40f462de4528e8e383b3455"}
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.206083 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.292366 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-ssh-key-openstack-edpm-ipam\") pod \"53cbfe3d-8559-41aa-8352-5480c56e3624\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") "
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.292473 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqvhp\" (UniqueName: \"kubernetes.io/projected/53cbfe3d-8559-41aa-8352-5480c56e3624-kube-api-access-fqvhp\") pod \"53cbfe3d-8559-41aa-8352-5480c56e3624\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") "
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.292742 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-inventory-0\") pod \"53cbfe3d-8559-41aa-8352-5480c56e3624\" (UID: \"53cbfe3d-8559-41aa-8352-5480c56e3624\") "
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.298553 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53cbfe3d-8559-41aa-8352-5480c56e3624-kube-api-access-fqvhp" (OuterVolumeSpecName: "kube-api-access-fqvhp") pod "53cbfe3d-8559-41aa-8352-5480c56e3624" (UID: "53cbfe3d-8559-41aa-8352-5480c56e3624"). InnerVolumeSpecName "kube-api-access-fqvhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.318919 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "53cbfe3d-8559-41aa-8352-5480c56e3624" (UID: "53cbfe3d-8559-41aa-8352-5480c56e3624"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.319550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "53cbfe3d-8559-41aa-8352-5480c56e3624" (UID: "53cbfe3d-8559-41aa-8352-5480c56e3624"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.395616 4751 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-inventory-0\") on node \"crc\" DevicePath \"\""
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.395661 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/53cbfe3d-8559-41aa-8352-5480c56e3624-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.395676 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqvhp\" (UniqueName: \"kubernetes.io/projected/53cbfe3d-8559-41aa-8352-5480c56e3624-kube-api-access-fqvhp\") on node \"crc\" DevicePath \"\""
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.783886 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr" event={"ID":"53cbfe3d-8559-41aa-8352-5480c56e3624","Type":"ContainerDied","Data":"e8feecf8a5987e3375a97fb542903a4b08b716c8b95691123c43640fbcb8a9fe"}
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.783935 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9cnpr"
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.783947 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8feecf8a5987e3375a97fb542903a4b08b716c8b95691123c43640fbcb8a9fe"
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.874606 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l"]
Nov 23 04:25:05 crc kubenswrapper[4751]: E1123 04:25:05.875194 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cbfe3d-8559-41aa-8352-5480c56e3624" containerName="ssh-known-hosts-edpm-deployment"
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.875218 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cbfe3d-8559-41aa-8352-5480c56e3624" containerName="ssh-known-hosts-edpm-deployment"
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.875555 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cbfe3d-8559-41aa-8352-5480c56e3624" containerName="ssh-known-hosts-edpm-deployment"
Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.876584 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.878962 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.879452 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.879640 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.879762 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:25:05 crc kubenswrapper[4751]: I1123 04:25:05.885591 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l"] Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.008576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwlkr\" (UniqueName: \"kubernetes.io/projected/98681550-3696-4f63-a16d-edaf78bf06fb-kube-api-access-cwlkr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.008697 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.009021 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.110655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwlkr\" (UniqueName: \"kubernetes.io/projected/98681550-3696-4f63-a16d-edaf78bf06fb-kube-api-access-cwlkr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.111273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.111511 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.120535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.121325 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.146226 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwlkr\" (UniqueName: \"kubernetes.io/projected/98681550-3696-4f63-a16d-edaf78bf06fb-kube-api-access-cwlkr\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-p922l\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.217559 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.667542 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l"] Nov 23 04:25:06 crc kubenswrapper[4751]: I1123 04:25:06.798007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" event={"ID":"98681550-3696-4f63-a16d-edaf78bf06fb","Type":"ContainerStarted","Data":"78c8f94e660609b85cd90fa8f849850df3d1453691bd01d8f0226e50b2d7555d"} Nov 23 04:25:07 crc kubenswrapper[4751]: I1123 04:25:07.811694 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" event={"ID":"98681550-3696-4f63-a16d-edaf78bf06fb","Type":"ContainerStarted","Data":"ac4752969a443bac40cd60e01b73958e74aa1060e22eb52a4b134547f2416511"} Nov 23 04:25:07 crc kubenswrapper[4751]: I1123 04:25:07.838829 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" podStartSLOduration=2.320632226 podStartE2EDuration="2.838802813s" podCreationTimestamp="2025-11-23 04:25:05 +0000 UTC" firstStartedPulling="2025-11-23 04:25:06.671446724 +0000 UTC m=+1802.865118083" lastFinishedPulling="2025-11-23 04:25:07.189617271 +0000 UTC m=+1803.383288670" observedRunningTime="2025-11-23 04:25:07.829266433 +0000 UTC m=+1804.022937802" watchObservedRunningTime="2025-11-23 04:25:07.838802813 +0000 UTC m=+1804.032474202" Nov 23 04:25:13 crc kubenswrapper[4751]: I1123 04:25:13.644745 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:25:13 crc kubenswrapper[4751]: I1123 04:25:13.884742 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"c1e7e8b52e36f653eea0fa20c8e4329e8aee9acb57f91ce86f6262e362e65ce7"} Nov 23 04:25:16 crc 
kubenswrapper[4751]: I1123 04:25:16.916907 4751 generic.go:334] "Generic (PLEG): container finished" podID="98681550-3696-4f63-a16d-edaf78bf06fb" containerID="ac4752969a443bac40cd60e01b73958e74aa1060e22eb52a4b134547f2416511" exitCode=0 Nov 23 04:25:16 crc kubenswrapper[4751]: I1123 04:25:16.916998 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" event={"ID":"98681550-3696-4f63-a16d-edaf78bf06fb","Type":"ContainerDied","Data":"ac4752969a443bac40cd60e01b73958e74aa1060e22eb52a4b134547f2416511"} Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.400895 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.419813 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-ssh-key\") pod \"98681550-3696-4f63-a16d-edaf78bf06fb\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.419890 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-inventory\") pod \"98681550-3696-4f63-a16d-edaf78bf06fb\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.420042 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwlkr\" (UniqueName: \"kubernetes.io/projected/98681550-3696-4f63-a16d-edaf78bf06fb-kube-api-access-cwlkr\") pod \"98681550-3696-4f63-a16d-edaf78bf06fb\" (UID: \"98681550-3696-4f63-a16d-edaf78bf06fb\") " Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.426483 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98681550-3696-4f63-a16d-edaf78bf06fb-kube-api-access-cwlkr" (OuterVolumeSpecName: "kube-api-access-cwlkr") pod "98681550-3696-4f63-a16d-edaf78bf06fb" (UID: "98681550-3696-4f63-a16d-edaf78bf06fb"). InnerVolumeSpecName "kube-api-access-cwlkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.470868 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-inventory" (OuterVolumeSpecName: "inventory") pod "98681550-3696-4f63-a16d-edaf78bf06fb" (UID: "98681550-3696-4f63-a16d-edaf78bf06fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.473607 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "98681550-3696-4f63-a16d-edaf78bf06fb" (UID: "98681550-3696-4f63-a16d-edaf78bf06fb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.522001 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.522033 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98681550-3696-4f63-a16d-edaf78bf06fb-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.522043 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwlkr\" (UniqueName: \"kubernetes.io/projected/98681550-3696-4f63-a16d-edaf78bf06fb-kube-api-access-cwlkr\") on node \"crc\" DevicePath \"\"" Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.948575 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" event={"ID":"98681550-3696-4f63-a16d-edaf78bf06fb","Type":"ContainerDied","Data":"78c8f94e660609b85cd90fa8f849850df3d1453691bd01d8f0226e50b2d7555d"} Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.948639 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c8f94e660609b85cd90fa8f849850df3d1453691bd01d8f0226e50b2d7555d" Nov 23 04:25:18 crc kubenswrapper[4751]: I1123 04:25:18.948707 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-p922l" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.094143 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg"] Nov 23 04:25:19 crc kubenswrapper[4751]: E1123 04:25:19.094597 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98681550-3696-4f63-a16d-edaf78bf06fb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.094614 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="98681550-3696-4f63-a16d-edaf78bf06fb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.094797 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="98681550-3696-4f63-a16d-edaf78bf06fb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.095405 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.098201 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.098396 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.100580 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.100773 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.112230 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg"] Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.142005 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.142087 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.142132 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9ql\" (UniqueName: \"kubernetes.io/projected/be595bce-317a-48c8-949e-2947f0954d0b-kube-api-access-6c9ql\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.243422 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.243483 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.243509 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c9ql\" (UniqueName: \"kubernetes.io/projected/be595bce-317a-48c8-949e-2947f0954d0b-kube-api-access-6c9ql\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: 
\"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.248975 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.249026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.262771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c9ql\" (UniqueName: \"kubernetes.io/projected/be595bce-317a-48c8-949e-2947f0954d0b-kube-api-access-6c9ql\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:19 crc kubenswrapper[4751]: I1123 04:25:19.414338 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:20 crc kubenswrapper[4751]: I1123 04:25:20.001550 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg"] Nov 23 04:25:20 crc kubenswrapper[4751]: I1123 04:25:20.978165 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" event={"ID":"be595bce-317a-48c8-949e-2947f0954d0b","Type":"ContainerStarted","Data":"583f0da9b5d68051342b3e8edc7006265d9501b330577472351c1ef48666d97d"} Nov 23 04:25:20 crc kubenswrapper[4751]: I1123 04:25:20.978234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" event={"ID":"be595bce-317a-48c8-949e-2947f0954d0b","Type":"ContainerStarted","Data":"9e46113c60b90f412b35529d2af9a939f1de9f4ea4f8ec4e2c70fc1cc33cc89c"} Nov 23 04:25:21 crc kubenswrapper[4751]: I1123 04:25:21.196099 4751 scope.go:117] "RemoveContainer" containerID="78faa58eb794c18d8e9b698ba99599bca266c8c1e9ac12ee2e4ef93603740fad" Nov 23 04:25:31 crc kubenswrapper[4751]: I1123 04:25:31.078721 4751 generic.go:334] "Generic (PLEG): container finished" podID="be595bce-317a-48c8-949e-2947f0954d0b" containerID="583f0da9b5d68051342b3e8edc7006265d9501b330577472351c1ef48666d97d" exitCode=0 Nov 23 04:25:31 crc kubenswrapper[4751]: I1123 04:25:31.078801 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" event={"ID":"be595bce-317a-48c8-949e-2947f0954d0b","Type":"ContainerDied","Data":"583f0da9b5d68051342b3e8edc7006265d9501b330577472351c1ef48666d97d"} Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.616963 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.819213 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-inventory\") pod \"be595bce-317a-48c8-949e-2947f0954d0b\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.819298 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-ssh-key\") pod \"be595bce-317a-48c8-949e-2947f0954d0b\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.819387 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c9ql\" (UniqueName: \"kubernetes.io/projected/be595bce-317a-48c8-949e-2947f0954d0b-kube-api-access-6c9ql\") pod \"be595bce-317a-48c8-949e-2947f0954d0b\" (UID: \"be595bce-317a-48c8-949e-2947f0954d0b\") " Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.824887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be595bce-317a-48c8-949e-2947f0954d0b-kube-api-access-6c9ql" (OuterVolumeSpecName: "kube-api-access-6c9ql") pod "be595bce-317a-48c8-949e-2947f0954d0b" (UID: "be595bce-317a-48c8-949e-2947f0954d0b"). InnerVolumeSpecName "kube-api-access-6c9ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.850525 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-inventory" (OuterVolumeSpecName: "inventory") pod "be595bce-317a-48c8-949e-2947f0954d0b" (UID: "be595bce-317a-48c8-949e-2947f0954d0b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.869295 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be595bce-317a-48c8-949e-2947f0954d0b" (UID: "be595bce-317a-48c8-949e-2947f0954d0b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.926119 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.926163 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be595bce-317a-48c8-949e-2947f0954d0b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:25:32 crc kubenswrapper[4751]: I1123 04:25:32.926177 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c9ql\" (UniqueName: \"kubernetes.io/projected/be595bce-317a-48c8-949e-2947f0954d0b-kube-api-access-6c9ql\") on node \"crc\" DevicePath \"\"" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.101683 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" event={"ID":"be595bce-317a-48c8-949e-2947f0954d0b","Type":"ContainerDied","Data":"9e46113c60b90f412b35529d2af9a939f1de9f4ea4f8ec4e2c70fc1cc33cc89c"} Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.101731 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e46113c60b90f412b35529d2af9a939f1de9f4ea4f8ec4e2c70fc1cc33cc89c" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.101794 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.214452 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf"] Nov 23 04:25:33 crc kubenswrapper[4751]: E1123 04:25:33.214883 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be595bce-317a-48c8-949e-2947f0954d0b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.214900 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="be595bce-317a-48c8-949e-2947f0954d0b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.215132 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="be595bce-317a-48c8-949e-2947f0954d0b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.215843 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.219640 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.220615 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.221765 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.222883 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.223170 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.223771 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.224074 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.232421 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233450 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233551 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233594 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233711 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233744 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233781 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233867 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7zjx\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-kube-api-access-h7zjx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233961 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.233992 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.234060 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.234109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.249953 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf"] Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336211 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7zjx\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-kube-api-access-h7zjx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336325 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336380 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336406 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336464 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336499 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336555 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336605 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336695 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336717 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336745 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336810 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.336838 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.342234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.342315 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.342333 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.343643 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.343838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.344181 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.345754 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.348019 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.348034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.348063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.348115 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.351718 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.355785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: 
\"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.358070 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7zjx\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-kube-api-access-h7zjx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.559013 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:25:33 crc kubenswrapper[4751]: I1123 04:25:33.942754 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf"] Nov 23 04:25:34 crc kubenswrapper[4751]: I1123 04:25:34.117232 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" event={"ID":"3068980f-3607-43b3-b505-d4663202d8dd","Type":"ContainerStarted","Data":"3fda44c6b193049b80e4554abc4b10310d5e4ad0c6298ed9b25952ddb8baa0d3"} Nov 23 04:25:35 crc kubenswrapper[4751]: I1123 04:25:35.130955 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" event={"ID":"3068980f-3607-43b3-b505-d4663202d8dd","Type":"ContainerStarted","Data":"2a0963ea145cae23b307ecff6e5dc2d50db49056d6445aafcb9efef830ebb9e7"} Nov 23 04:25:35 crc kubenswrapper[4751]: I1123 04:25:35.167526 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" podStartSLOduration=1.758563613 podStartE2EDuration="2.167496136s" podCreationTimestamp="2025-11-23 04:25:33 +0000 UTC" firstStartedPulling="2025-11-23 04:25:33.944514179 +0000 UTC m=+1830.138185578" lastFinishedPulling="2025-11-23 04:25:34.353446742 +0000 UTC m=+1830.547118101" observedRunningTime="2025-11-23 04:25:35.163211914 +0000 UTC m=+1831.356883323" watchObservedRunningTime="2025-11-23 04:25:35.167496136 +0000 UTC m=+1831.361167535" Nov 23 04:26:18 crc kubenswrapper[4751]: I1123 04:26:18.625842 4751 generic.go:334] "Generic (PLEG): container finished" podID="3068980f-3607-43b3-b505-d4663202d8dd" containerID="2a0963ea145cae23b307ecff6e5dc2d50db49056d6445aafcb9efef830ebb9e7" exitCode=0 Nov 23 04:26:18 crc kubenswrapper[4751]: I1123 04:26:18.625931 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" event={"ID":"3068980f-3607-43b3-b505-d4663202d8dd","Type":"ContainerDied","Data":"2a0963ea145cae23b307ecff6e5dc2d50db49056d6445aafcb9efef830ebb9e7"} Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.101256 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301332 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7zjx\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-kube-api-access-h7zjx\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301384 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301411 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-neutron-metadata-combined-ca-bundle\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-libvirt-combined-ca-bundle\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301479 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ovn-combined-ca-bundle\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-repo-setup-combined-ca-bundle\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301528 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301556 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301572 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-bootstrap-combined-ca-bundle\") pod 
\"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301641 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-nova-combined-ca-bundle\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301672 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-inventory\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301701 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ssh-key\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.301758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-telemetry-combined-ca-bundle\") pod \"3068980f-3607-43b3-b505-d4663202d8dd\" (UID: \"3068980f-3607-43b3-b505-d4663202d8dd\") " Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.309125 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.309832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.309882 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.309968 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.309999 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.311550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.311671 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-kube-api-access-h7zjx" (OuterVolumeSpecName: "kube-api-access-h7zjx") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "kube-api-access-h7zjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.312154 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.313121 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.313775 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.327765 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.327937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.353121 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.354949 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-inventory" (OuterVolumeSpecName: "inventory") pod "3068980f-3607-43b3-b505-d4663202d8dd" (UID: "3068980f-3607-43b3-b505-d4663202d8dd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404539 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404586 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404601 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7zjx\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-kube-api-access-h7zjx\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404614 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404630 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404643 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404657 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404669 4751 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404681 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404693 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3068980f-3607-43b3-b505-d4663202d8dd-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404705 4751 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404717 4751 reconciler_common.go:293] "Volume detached for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404757 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.404769 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3068980f-3607-43b3-b505-d4663202d8dd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.657978 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.664185 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf" event={"ID":"3068980f-3607-43b3-b505-d4663202d8dd","Type":"ContainerDied","Data":"3fda44c6b193049b80e4554abc4b10310d5e4ad0c6298ed9b25952ddb8baa0d3"} Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.664243 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fda44c6b193049b80e4554abc4b10310d5e4ad0c6298ed9b25952ddb8baa0d3" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.821989 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr"] Nov 23 04:26:20 crc kubenswrapper[4751]: E1123 04:26:20.823312 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3068980f-3607-43b3-b505-d4663202d8dd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.823336 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3068980f-3607-43b3-b505-d4663202d8dd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.823567 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3068980f-3607-43b3-b505-d4663202d8dd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.824404 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.828964 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.829171 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.829487 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.829642 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.832212 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:26:20 crc kubenswrapper[4751]: I1123 04:26:20.856619 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr"] Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.015764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98sss\" (UniqueName: \"kubernetes.io/projected/dd88992f-1e56-48ed-913c-4ecd0fc20767-kube-api-access-98sss\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.015929 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.016018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.016109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.016198 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.117756 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98sss\" 
(UniqueName: \"kubernetes.io/projected/dd88992f-1e56-48ed-913c-4ecd0fc20767-kube-api-access-98sss\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.117838 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.117892 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.117940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.117990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.120494 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.128754 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.137242 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.138023 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.142218 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98sss\" (UniqueName: \"kubernetes.io/projected/dd88992f-1e56-48ed-913c-4ecd0fc20767-kube-api-access-98sss\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5q5kr\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.156887 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:26:21 crc kubenswrapper[4751]: I1123 04:26:21.768145 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr"] Nov 23 04:26:21 crc kubenswrapper[4751]: W1123 04:26:21.774193 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd88992f_1e56_48ed_913c_4ecd0fc20767.slice/crio-f37fd6f844ad08c32a67c75b9441f456ff1054746cf273ec86168aa7612d2593 WatchSource:0}: Error finding container f37fd6f844ad08c32a67c75b9441f456ff1054746cf273ec86168aa7612d2593: Status 404 returned error can't find the container with id f37fd6f844ad08c32a67c75b9441f456ff1054746cf273ec86168aa7612d2593 Nov 23 04:26:22 crc kubenswrapper[4751]: I1123 04:26:22.674741 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" event={"ID":"dd88992f-1e56-48ed-913c-4ecd0fc20767","Type":"ContainerStarted","Data":"34223c5e6e2fd4e3088c683b0c088ab27ecc1079039f999b3461d20501c2e66a"} Nov 23 04:26:22 crc kubenswrapper[4751]: I1123 04:26:22.675170 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" event={"ID":"dd88992f-1e56-48ed-913c-4ecd0fc20767","Type":"ContainerStarted","Data":"f37fd6f844ad08c32a67c75b9441f456ff1054746cf273ec86168aa7612d2593"} Nov 23 04:26:22 crc kubenswrapper[4751]: I1123 04:26:22.697704 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" podStartSLOduration=2.196885962 podStartE2EDuration="2.697685713s" podCreationTimestamp="2025-11-23 04:26:20 +0000 UTC" firstStartedPulling="2025-11-23 04:26:21.777025182 +0000 UTC m=+1877.970696541" lastFinishedPulling="2025-11-23 04:26:22.277824923 +0000 UTC m=+1878.471496292" observedRunningTime="2025-11-23 04:26:22.695744102 +0000 UTC m=+1878.889415491" watchObservedRunningTime="2025-11-23 04:26:22.697685713 +0000 UTC m=+1878.891357082" Nov 23 04:26:40 crc kubenswrapper[4751]: I1123 04:26:40.950233 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-48ljb"] Nov 23 04:26:40 crc kubenswrapper[4751]: I1123 04:26:40.953945 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:40 crc kubenswrapper[4751]: I1123 04:26:40.960270 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48ljb"] Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.067733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-catalog-content\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.068047 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs47n\" (UniqueName: \"kubernetes.io/projected/c952ef52-6869-4308-9d40-a2691187f580-kube-api-access-zs47n\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.068123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-utilities\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.170140 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs47n\" (UniqueName: \"kubernetes.io/projected/c952ef52-6869-4308-9d40-a2691187f580-kube-api-access-zs47n\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.170236 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-utilities\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.170348 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-catalog-content\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.170972 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-catalog-content\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.171025 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-utilities\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.196429 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zs47n\" (UniqueName: \"kubernetes.io/projected/c952ef52-6869-4308-9d40-a2691187f580-kube-api-access-zs47n\") pod \"community-operators-48ljb\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.279940 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.738188 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48ljb"] Nov 23 04:26:41 crc kubenswrapper[4751]: W1123 04:26:41.756660 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc952ef52_6869_4308_9d40_a2691187f580.slice/crio-6833f29b2687a95934b70d7084e3adcc1e457be958cb883a760e8a8fef09b65c WatchSource:0}: Error finding container 6833f29b2687a95934b70d7084e3adcc1e457be958cb883a760e8a8fef09b65c: Status 404 returned error can't find the container with id 6833f29b2687a95934b70d7084e3adcc1e457be958cb883a760e8a8fef09b65c Nov 23 04:26:41 crc kubenswrapper[4751]: I1123 04:26:41.865191 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48ljb" event={"ID":"c952ef52-6869-4308-9d40-a2691187f580","Type":"ContainerStarted","Data":"6833f29b2687a95934b70d7084e3adcc1e457be958cb883a760e8a8fef09b65c"} Nov 23 04:26:42 crc kubenswrapper[4751]: I1123 04:26:42.875638 4751 generic.go:334] "Generic (PLEG): container finished" podID="c952ef52-6869-4308-9d40-a2691187f580" containerID="409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052" exitCode=0 Nov 23 04:26:42 crc kubenswrapper[4751]: I1123 04:26:42.875732 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48ljb" event={"ID":"c952ef52-6869-4308-9d40-a2691187f580","Type":"ContainerDied","Data":"409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052"} Nov 23 04:26:42 crc kubenswrapper[4751]: I1123 04:26:42.878052 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:26:43 crc kubenswrapper[4751]: I1123 04:26:43.885340 4751 generic.go:334] "Generic (PLEG): container finished" podID="c952ef52-6869-4308-9d40-a2691187f580" containerID="52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010" exitCode=0 Nov 23 04:26:43 crc kubenswrapper[4751]: I1123 04:26:43.885466 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48ljb" event={"ID":"c952ef52-6869-4308-9d40-a2691187f580","Type":"ContainerDied","Data":"52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010"} Nov 23 04:26:44 crc kubenswrapper[4751]: I1123 04:26:44.897508 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48ljb" event={"ID":"c952ef52-6869-4308-9d40-a2691187f580","Type":"ContainerStarted","Data":"3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49"} Nov 23 04:26:44 crc kubenswrapper[4751]: I1123 04:26:44.933672 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-48ljb" podStartSLOduration=3.504867955 podStartE2EDuration="4.933646488s" podCreationTimestamp="2025-11-23 04:26:40 +0000 UTC" firstStartedPulling="2025-11-23 04:26:42.87777454 +0000 UTC 
m=+1899.071445899" lastFinishedPulling="2025-11-23 04:26:44.306553063 +0000 UTC m=+1900.500224432" observedRunningTime="2025-11-23 04:26:44.920040821 +0000 UTC m=+1901.113712220" watchObservedRunningTime="2025-11-23 04:26:44.933646488 +0000 UTC m=+1901.127317857" Nov 23 04:26:51 crc kubenswrapper[4751]: I1123 04:26:51.281024 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:51 crc kubenswrapper[4751]: I1123 04:26:51.281805 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:51 crc kubenswrapper[4751]: I1123 04:26:51.340612 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:52 crc kubenswrapper[4751]: I1123 04:26:52.056574 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:52 crc kubenswrapper[4751]: I1123 04:26:52.125605 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48ljb"] Nov 23 04:26:53 crc kubenswrapper[4751]: I1123 04:26:53.994771 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-48ljb" podUID="c952ef52-6869-4308-9d40-a2691187f580" containerName="registry-server" containerID="cri-o://3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49" gracePeriod=2 Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.501730 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.570986 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-utilities\") pod \"c952ef52-6869-4308-9d40-a2691187f580\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.571129 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-catalog-content\") pod \"c952ef52-6869-4308-9d40-a2691187f580\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.571286 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs47n\" (UniqueName: \"kubernetes.io/projected/c952ef52-6869-4308-9d40-a2691187f580-kube-api-access-zs47n\") pod \"c952ef52-6869-4308-9d40-a2691187f580\" (UID: \"c952ef52-6869-4308-9d40-a2691187f580\") " Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.571891 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-utilities" (OuterVolumeSpecName: "utilities") pod "c952ef52-6869-4308-9d40-a2691187f580" (UID: "c952ef52-6869-4308-9d40-a2691187f580"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.576273 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c952ef52-6869-4308-9d40-a2691187f580-kube-api-access-zs47n" (OuterVolumeSpecName: "kube-api-access-zs47n") pod "c952ef52-6869-4308-9d40-a2691187f580" (UID: "c952ef52-6869-4308-9d40-a2691187f580"). InnerVolumeSpecName "kube-api-access-zs47n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.625040 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c952ef52-6869-4308-9d40-a2691187f580" (UID: "c952ef52-6869-4308-9d40-a2691187f580"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.674568 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.674596 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c952ef52-6869-4308-9d40-a2691187f580-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:54 crc kubenswrapper[4751]: I1123 04:26:54.674608 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs47n\" (UniqueName: \"kubernetes.io/projected/c952ef52-6869-4308-9d40-a2691187f580-kube-api-access-zs47n\") on node \"crc\" DevicePath \"\"" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.007951 4751 generic.go:334] "Generic (PLEG): container finished" podID="c952ef52-6869-4308-9d40-a2691187f580" containerID="3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49" exitCode=0 Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.008020 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48ljb" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.008020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48ljb" event={"ID":"c952ef52-6869-4308-9d40-a2691187f580","Type":"ContainerDied","Data":"3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49"} Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.008087 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48ljb" event={"ID":"c952ef52-6869-4308-9d40-a2691187f580","Type":"ContainerDied","Data":"6833f29b2687a95934b70d7084e3adcc1e457be958cb883a760e8a8fef09b65c"} Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.008117 4751 scope.go:117] "RemoveContainer" containerID="3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.036969 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48ljb"] Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.043335 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-48ljb"] Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.053183 4751 scope.go:117] "RemoveContainer" containerID="52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.081661 4751 scope.go:117] "RemoveContainer" containerID="409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.125841 4751 scope.go:117] "RemoveContainer" containerID="3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49" Nov 23 04:26:55 crc kubenswrapper[4751]: E1123 04:26:55.126519 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49\": container with ID starting with 3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49 not found: ID does not exist" containerID="3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.126611 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49"} err="failed to get container status \"3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49\": rpc error: code = NotFound desc = could not find container \"3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49\": container with ID starting with 3a8d3ccca58aa45b403baa77c549734ed592a5c488ed6363740029c8f50d0f49 not found: ID does not exist" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.126646 4751 scope.go:117] "RemoveContainer" containerID="52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010" Nov 23 04:26:55 crc kubenswrapper[4751]: E1123 04:26:55.127150 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010\": container with ID starting with 52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010 not found: ID does not exist" containerID="52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.127203 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010"} err="failed to get container status \"52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010\": rpc error: code = NotFound desc = could not find container \"52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010\": container with ID starting with 52a34876c48652722e78f53f48d98c57a73e0f22bbd6206668c16dd75dc0e010 not found: ID does not exist" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.127235 4751 scope.go:117] "RemoveContainer" containerID="409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052" Nov 23 04:26:55 crc kubenswrapper[4751]: E1123 04:26:55.127555 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052\": container with ID starting with 409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052 not found: ID does not exist" containerID="409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052" Nov 23 04:26:55 crc kubenswrapper[4751]: I1123 04:26:55.127587 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052"} err="failed to get container status \"409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052\": rpc error: code = NotFound desc = could not find container \"409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052\": container with ID starting with 409ae58add123038e5acccdcdd8f519fe6084625ff547def1a26e07478409052 not found: ID does not exist" Nov 23 04:26:56 crc kubenswrapper[4751]: I1123 04:26:56.654458 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c952ef52-6869-4308-9d40-a2691187f580" path="/var/lib/kubelet/pods/c952ef52-6869-4308-9d40-a2691187f580/volumes" Nov 23 04:27:32 crc kubenswrapper[4751]: E1123 04:27:32.179394 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd88992f_1e56_48ed_913c_4ecd0fc20767.slice/crio-conmon-34223c5e6e2fd4e3088c683b0c088ab27ecc1079039f999b3461d20501c2e66a.scope\": RecentStats: unable to find data in memory cache]" Nov 23 04:27:32 crc kubenswrapper[4751]: I1123 04:27:32.417001 4751 generic.go:334] "Generic (PLEG): container finished" podID="dd88992f-1e56-48ed-913c-4ecd0fc20767" containerID="34223c5e6e2fd4e3088c683b0c088ab27ecc1079039f999b3461d20501c2e66a" exitCode=0 Nov 23 04:27:32 crc kubenswrapper[4751]: I1123 04:27:32.417123 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" event={"ID":"dd88992f-1e56-48ed-913c-4ecd0fc20767","Type":"ContainerDied","Data":"34223c5e6e2fd4e3088c683b0c088ab27ecc1079039f999b3461d20501c2e66a"} Nov 23 04:27:33 crc kubenswrapper[4751]: I1123 04:27:33.925717 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:27:33 crc kubenswrapper[4751]: I1123 04:27:33.972376 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovncontroller-config-0\") pod \"dd88992f-1e56-48ed-913c-4ecd0fc20767\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " Nov 23 04:27:33 crc kubenswrapper[4751]: I1123 04:27:33.972464 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovn-combined-ca-bundle\") pod \"dd88992f-1e56-48ed-913c-4ecd0fc20767\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " Nov 23 04:27:33 crc kubenswrapper[4751]: I1123 04:27:33.972613 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-inventory\") pod \"dd88992f-1e56-48ed-913c-4ecd0fc20767\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " Nov 23 04:27:33 crc kubenswrapper[4751]: I1123 04:27:33.972664 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ssh-key\") pod \"dd88992f-1e56-48ed-913c-4ecd0fc20767\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " Nov 23 04:27:33 crc kubenswrapper[4751]: I1123 04:27:33.972722 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98sss\" (UniqueName: \"kubernetes.io/projected/dd88992f-1e56-48ed-913c-4ecd0fc20767-kube-api-access-98sss\") pod \"dd88992f-1e56-48ed-913c-4ecd0fc20767\" (UID: \"dd88992f-1e56-48ed-913c-4ecd0fc20767\") " Nov 23 04:27:33 crc kubenswrapper[4751]: I1123 04:27:33.978091 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "dd88992f-1e56-48ed-913c-4ecd0fc20767" (UID: "dd88992f-1e56-48ed-913c-4ecd0fc20767"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:27:33 crc kubenswrapper[4751]: I1123 04:27:33.978630 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd88992f-1e56-48ed-913c-4ecd0fc20767-kube-api-access-98sss" (OuterVolumeSpecName: "kube-api-access-98sss") pod "dd88992f-1e56-48ed-913c-4ecd0fc20767" (UID: "dd88992f-1e56-48ed-913c-4ecd0fc20767"). InnerVolumeSpecName "kube-api-access-98sss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.009551 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dd88992f-1e56-48ed-913c-4ecd0fc20767" (UID: "dd88992f-1e56-48ed-913c-4ecd0fc20767"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.022258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-inventory" (OuterVolumeSpecName: "inventory") pod "dd88992f-1e56-48ed-913c-4ecd0fc20767" (UID: "dd88992f-1e56-48ed-913c-4ecd0fc20767"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.025653 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "dd88992f-1e56-48ed-913c-4ecd0fc20767" (UID: "dd88992f-1e56-48ed-913c-4ecd0fc20767"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.077081 4751 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.077128 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.077146 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.077161 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd88992f-1e56-48ed-913c-4ecd0fc20767-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.077179 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98sss\" (UniqueName: \"kubernetes.io/projected/dd88992f-1e56-48ed-913c-4ecd0fc20767-kube-api-access-98sss\") on node \"crc\" DevicePath \"\"" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.482303 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" event={"ID":"dd88992f-1e56-48ed-913c-4ecd0fc20767","Type":"ContainerDied","Data":"f37fd6f844ad08c32a67c75b9441f456ff1054746cf273ec86168aa7612d2593"} Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.482754 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37fd6f844ad08c32a67c75b9441f456ff1054746cf273ec86168aa7612d2593" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.482426 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5q5kr" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.588701 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9"] Nov 23 04:27:34 crc kubenswrapper[4751]: E1123 04:27:34.589196 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c952ef52-6869-4308-9d40-a2691187f580" containerName="registry-server" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.589221 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c952ef52-6869-4308-9d40-a2691187f580" containerName="registry-server" Nov 23 04:27:34 crc kubenswrapper[4751]: E1123 04:27:34.589241 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c952ef52-6869-4308-9d40-a2691187f580" containerName="extract-content" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.589250 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c952ef52-6869-4308-9d40-a2691187f580" containerName="extract-content" Nov 23 04:27:34 crc kubenswrapper[4751]: E1123 04:27:34.589286 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c952ef52-6869-4308-9d40-a2691187f580" containerName="extract-utilities" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.589296 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c952ef52-6869-4308-9d40-a2691187f580" containerName="extract-utilities" Nov 23 04:27:34 crc kubenswrapper[4751]: E1123 04:27:34.589310 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd88992f-1e56-48ed-913c-4ecd0fc20767" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.589320 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd88992f-1e56-48ed-913c-4ecd0fc20767" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.589827 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c952ef52-6869-4308-9d40-a2691187f580" containerName="registry-server" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.589869 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd88992f-1e56-48ed-913c-4ecd0fc20767" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.590739 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.594232 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.594484 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.594611 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.595363 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.595413 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.596499 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.613398 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9"] Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.688133 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg7jn\" (UniqueName: \"kubernetes.io/projected/e8f0c75e-2965-4ab3-841c-aae06611df5a-kube-api-access-qg7jn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.688224 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.688545 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.688821 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.688936 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.689247 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.790999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg7jn\" (UniqueName: \"kubernetes.io/projected/e8f0c75e-2965-4ab3-841c-aae06611df5a-kube-api-access-qg7jn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.791150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.791278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.791468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.791571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.791756 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 
04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.799299 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.799722 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.799904 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.800907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.811906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.828999 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg7jn\" (UniqueName: \"kubernetes.io/projected/e8f0c75e-2965-4ab3-841c-aae06611df5a-kube-api-access-qg7jn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:34 crc kubenswrapper[4751]: I1123 04:27:34.920458 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:27:35 crc kubenswrapper[4751]: I1123 04:27:35.285436 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9"] Nov 23 04:27:35 crc kubenswrapper[4751]: I1123 04:27:35.493833 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" event={"ID":"e8f0c75e-2965-4ab3-841c-aae06611df5a","Type":"ContainerStarted","Data":"de3d4b852b74cbfc1bd6de0605d33e85cd58718a9d17b8fb465ca7b2204e81d0"} Nov 23 04:27:36 crc kubenswrapper[4751]: I1123 04:27:36.505646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" event={"ID":"e8f0c75e-2965-4ab3-841c-aae06611df5a","Type":"ContainerStarted","Data":"d5d891440b8f22f133516b657bb65f1fe478a466b1abcbaa22116428d17de226"} Nov 23 04:27:36 crc kubenswrapper[4751]: I1123 04:27:36.533911 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" podStartSLOduration=1.919778421 podStartE2EDuration="2.533887935s" podCreationTimestamp="2025-11-23 04:27:34 +0000 UTC" firstStartedPulling="2025-11-23 04:27:35.292966854 +0000 UTC m=+1951.486638213" lastFinishedPulling="2025-11-23 04:27:35.907076368 +0000 UTC m=+1952.100747727" observedRunningTime="2025-11-23 04:27:36.529643404 +0000 UTC m=+1952.723314783" watchObservedRunningTime="2025-11-23 04:27:36.533887935 +0000 UTC m=+1952.727559314" Nov 23 04:27:38 crc kubenswrapper[4751]: I1123 04:27:38.115098 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:27:38 crc kubenswrapper[4751]: I1123 04:27:38.115660 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:28:08 crc kubenswrapper[4751]: I1123 04:28:08.114576 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:28:08 crc kubenswrapper[4751]: I1123 04:28:08.115300 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:28:29 crc kubenswrapper[4751]: I1123 04:28:29.063417 4751 generic.go:334] "Generic (PLEG): container finished" podID="e8f0c75e-2965-4ab3-841c-aae06611df5a" containerID="d5d891440b8f22f133516b657bb65f1fe478a466b1abcbaa22116428d17de226" exitCode=0 Nov 23 04:28:29 crc kubenswrapper[4751]: I1123 04:28:29.063490 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" event={"ID":"e8f0c75e-2965-4ab3-841c-aae06611df5a","Type":"ContainerDied","Data":"d5d891440b8f22f133516b657bb65f1fe478a466b1abcbaa22116428d17de226"} Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.509545 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.615525 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-ssh-key\") pod \"e8f0c75e-2965-4ab3-841c-aae06611df5a\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.615702 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-metadata-combined-ca-bundle\") pod \"e8f0c75e-2965-4ab3-841c-aae06611df5a\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.615786 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-inventory\") pod \"e8f0c75e-2965-4ab3-841c-aae06611df5a\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.615848 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg7jn\" (UniqueName: \"kubernetes.io/projected/e8f0c75e-2965-4ab3-841c-aae06611df5a-kube-api-access-qg7jn\") pod \"e8f0c75e-2965-4ab3-841c-aae06611df5a\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.616017 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e8f0c75e-2965-4ab3-841c-aae06611df5a\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.616110 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-nova-metadata-neutron-config-0\") pod \"e8f0c75e-2965-4ab3-841c-aae06611df5a\" (UID: \"e8f0c75e-2965-4ab3-841c-aae06611df5a\") " Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.622162 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e8f0c75e-2965-4ab3-841c-aae06611df5a" (UID: "e8f0c75e-2965-4ab3-841c-aae06611df5a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.623865 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f0c75e-2965-4ab3-841c-aae06611df5a-kube-api-access-qg7jn" (OuterVolumeSpecName: "kube-api-access-qg7jn") pod "e8f0c75e-2965-4ab3-841c-aae06611df5a" (UID: "e8f0c75e-2965-4ab3-841c-aae06611df5a"). 
InnerVolumeSpecName "kube-api-access-qg7jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.653691 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e8f0c75e-2965-4ab3-841c-aae06611df5a" (UID: "e8f0c75e-2965-4ab3-841c-aae06611df5a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.657627 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e8f0c75e-2965-4ab3-841c-aae06611df5a" (UID: "e8f0c75e-2965-4ab3-841c-aae06611df5a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.658497 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e8f0c75e-2965-4ab3-841c-aae06611df5a" (UID: "e8f0c75e-2965-4ab3-841c-aae06611df5a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.663009 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-inventory" (OuterVolumeSpecName: "inventory") pod "e8f0c75e-2965-4ab3-841c-aae06611df5a" (UID: "e8f0c75e-2965-4ab3-841c-aae06611df5a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.718701 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.718751 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.718761 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.718771 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg7jn\" (UniqueName: \"kubernetes.io/projected/e8f0c75e-2965-4ab3-841c-aae06611df5a-kube-api-access-qg7jn\") on node \"crc\" DevicePath \"\"" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.718780 4751 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:28:30 crc kubenswrapper[4751]: I1123 04:28:30.718789 4751 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e8f0c75e-2965-4ab3-841c-aae06611df5a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.086950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" event={"ID":"e8f0c75e-2965-4ab3-841c-aae06611df5a","Type":"ContainerDied","Data":"de3d4b852b74cbfc1bd6de0605d33e85cd58718a9d17b8fb465ca7b2204e81d0"} Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.087442 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3d4b852b74cbfc1bd6de0605d33e85cd58718a9d17b8fb465ca7b2204e81d0" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.086969 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.209907 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn"] Nov 23 04:28:31 crc kubenswrapper[4751]: E1123 04:28:31.210303 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f0c75e-2965-4ab3-841c-aae06611df5a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.210321 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f0c75e-2965-4ab3-841c-aae06611df5a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.210575 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f0c75e-2965-4ab3-841c-aae06611df5a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.211217 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.214304 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.215013 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.216402 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.216853 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.226281 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.232600 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn"] Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.250971 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.251101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.251219 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.251262 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6899\" (UniqueName: \"kubernetes.io/projected/b1b8004d-68f3-41c1-ac68-2b35a527fd88-kube-api-access-m6899\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.251324 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.353150 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.353327 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.353459 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.353515 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6899\" (UniqueName: \"kubernetes.io/projected/b1b8004d-68f3-41c1-ac68-2b35a527fd88-kube-api-access-m6899\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.353614 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.357372 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.357424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.359999 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.370430 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.376584 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6899\" (UniqueName: \"kubernetes.io/projected/b1b8004d-68f3-41c1-ac68-2b35a527fd88-kube-api-access-m6899\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-44fjn\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:31 crc kubenswrapper[4751]: I1123 04:28:31.529333 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" Nov 23 04:28:32 crc kubenswrapper[4751]: I1123 04:28:32.152709 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn"] Nov 23 04:28:32 crc kubenswrapper[4751]: W1123 04:28:32.165767 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1b8004d_68f3_41c1_ac68_2b35a527fd88.slice/crio-87de9b7056cc045399d973afe9e3f092a837bac2d4c121dcc686e95e9d7f6e19 WatchSource:0}: Error finding container 87de9b7056cc045399d973afe9e3f092a837bac2d4c121dcc686e95e9d7f6e19: Status 404 returned error can't find the container with id 87de9b7056cc045399d973afe9e3f092a837bac2d4c121dcc686e95e9d7f6e19 Nov 23 04:28:33 crc kubenswrapper[4751]: I1123 04:28:33.113092 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" event={"ID":"b1b8004d-68f3-41c1-ac68-2b35a527fd88","Type":"ContainerStarted","Data":"5d70de436aae7b96a8d3b2f1bc0571051f2429e501fe3f0139d9cafc345afb31"} Nov 23 04:28:33 crc kubenswrapper[4751]: I1123 04:28:33.113539 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" event={"ID":"b1b8004d-68f3-41c1-ac68-2b35a527fd88","Type":"ContainerStarted","Data":"87de9b7056cc045399d973afe9e3f092a837bac2d4c121dcc686e95e9d7f6e19"} Nov 23 04:28:33 crc kubenswrapper[4751]: I1123 04:28:33.135898 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" podStartSLOduration=1.4976280690000001 podStartE2EDuration="2.135878327s" podCreationTimestamp="2025-11-23 04:28:31 +0000 UTC" firstStartedPulling="2025-11-23 04:28:32.167813159 +0000 UTC m=+2008.361484518" lastFinishedPulling="2025-11-23 04:28:32.806063407 +0000 UTC m=+2008.999734776" observedRunningTime="2025-11-23 04:28:33.134446569 +0000 UTC m=+2009.328117928" watchObservedRunningTime="2025-11-23 04:28:33.135878327 +0000 UTC m=+2009.329549686" Nov 23 04:28:38 crc kubenswrapper[4751]: I1123 04:28:38.114801 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:28:38 crc kubenswrapper[4751]: I1123 04:28:38.115475 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:28:38 crc kubenswrapper[4751]: I1123 04:28:38.115535 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:28:38 crc kubenswrapper[4751]: I1123 04:28:38.116538 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1e7e8b52e36f653eea0fa20c8e4329e8aee9acb57f91ce86f6262e362e65ce7"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:28:38 crc kubenswrapper[4751]: I1123 04:28:38.116612 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://c1e7e8b52e36f653eea0fa20c8e4329e8aee9acb57f91ce86f6262e362e65ce7" gracePeriod=600 Nov 23 04:28:39 crc kubenswrapper[4751]: I1123 04:28:39.192661 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="c1e7e8b52e36f653eea0fa20c8e4329e8aee9acb57f91ce86f6262e362e65ce7" exitCode=0 Nov 23 04:28:39 crc kubenswrapper[4751]: I1123 04:28:39.192758 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"c1e7e8b52e36f653eea0fa20c8e4329e8aee9acb57f91ce86f6262e362e65ce7"} Nov 23 04:28:39 crc kubenswrapper[4751]: I1123 04:28:39.193689 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"} Nov 23 04:28:39 crc kubenswrapper[4751]: I1123 04:28:39.193724 4751 scope.go:117] "RemoveContainer" containerID="da79c37d46b69ae274874b1af04cee7419cb54d8cc648b5dc524c1d6da161394" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.023201 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wdkrm"] Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.026067 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.048522 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdkrm"] Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.110839 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nts\" (UniqueName: \"kubernetes.io/projected/60d3af38-d0dc-4583-a609-d3568ee074be-kube-api-access-h5nts\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.111155 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-catalog-content\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.111312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-utilities\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.213510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-catalog-content\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.213593 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-utilities\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.213745 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5nts\" (UniqueName: \"kubernetes.io/projected/60d3af38-d0dc-4583-a609-d3568ee074be-kube-api-access-h5nts\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.214225 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-catalog-content\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.214248 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-utilities\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.234490 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h5nts\" (UniqueName: \"kubernetes.io/projected/60d3af38-d0dc-4583-a609-d3568ee074be-kube-api-access-h5nts\") pod \"certified-operators-wdkrm\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.348460 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:08 crc kubenswrapper[4751]: I1123 04:29:08.885472 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdkrm"] Nov 23 04:29:08 crc kubenswrapper[4751]: W1123 04:29:08.891972 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d3af38_d0dc_4583_a609_d3568ee074be.slice/crio-8e0857b018f212d465f58a98f6307bf6d31bf6fea5d576f1511af7e88e892d12 WatchSource:0}: Error finding container 8e0857b018f212d465f58a98f6307bf6d31bf6fea5d576f1511af7e88e892d12: Status 404 returned error can't find the container with id 8e0857b018f212d465f58a98f6307bf6d31bf6fea5d576f1511af7e88e892d12 Nov 23 04:29:09 crc kubenswrapper[4751]: I1123 04:29:09.511916 4751 generic.go:334] "Generic (PLEG): container finished" podID="60d3af38-d0dc-4583-a609-d3568ee074be" containerID="5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1" exitCode=0 Nov 23 04:29:09 crc kubenswrapper[4751]: I1123 04:29:09.512008 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdkrm" event={"ID":"60d3af38-d0dc-4583-a609-d3568ee074be","Type":"ContainerDied","Data":"5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1"} Nov 23 04:29:09 crc kubenswrapper[4751]: I1123 04:29:09.512067 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdkrm" event={"ID":"60d3af38-d0dc-4583-a609-d3568ee074be","Type":"ContainerStarted","Data":"8e0857b018f212d465f58a98f6307bf6d31bf6fea5d576f1511af7e88e892d12"} Nov 23 04:29:10 crc kubenswrapper[4751]: I1123 04:29:10.526160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdkrm" event={"ID":"60d3af38-d0dc-4583-a609-d3568ee074be","Type":"ContainerStarted","Data":"39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3"} Nov 23 04:29:11 crc kubenswrapper[4751]: I1123 04:29:11.539175 4751 generic.go:334] "Generic (PLEG): container finished" podID="60d3af38-d0dc-4583-a609-d3568ee074be" containerID="39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3" exitCode=0 Nov 23 04:29:11 crc kubenswrapper[4751]: I1123 04:29:11.540551 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdkrm" event={"ID":"60d3af38-d0dc-4583-a609-d3568ee074be","Type":"ContainerDied","Data":"39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3"} Nov 23 04:29:12 crc kubenswrapper[4751]: I1123 04:29:12.554801 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdkrm" event={"ID":"60d3af38-d0dc-4583-a609-d3568ee074be","Type":"ContainerStarted","Data":"64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8"} Nov 23 04:29:12 crc kubenswrapper[4751]: I1123 04:29:12.576740 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wdkrm" 
podStartSLOduration=2.128302865 podStartE2EDuration="4.576725429s" podCreationTimestamp="2025-11-23 04:29:08 +0000 UTC" firstStartedPulling="2025-11-23 04:29:09.51720407 +0000 UTC m=+2045.710875449" lastFinishedPulling="2025-11-23 04:29:11.965626654 +0000 UTC m=+2048.159298013" observedRunningTime="2025-11-23 04:29:12.574229304 +0000 UTC m=+2048.767900663" watchObservedRunningTime="2025-11-23 04:29:12.576725429 +0000 UTC m=+2048.770396778" Nov 23 04:29:18 crc kubenswrapper[4751]: I1123 04:29:18.348888 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:18 crc kubenswrapper[4751]: I1123 04:29:18.351824 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:18 crc kubenswrapper[4751]: I1123 04:29:18.439058 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:18 crc kubenswrapper[4751]: I1123 04:29:18.699618 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:18 crc kubenswrapper[4751]: I1123 04:29:18.763855 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wdkrm"] Nov 23 04:29:20 crc kubenswrapper[4751]: I1123 04:29:20.636198 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wdkrm" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" containerName="registry-server" containerID="cri-o://64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8" gracePeriod=2 Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.567529 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.648481 4751 generic.go:334] "Generic (PLEG): container finished" podID="60d3af38-d0dc-4583-a609-d3568ee074be" containerID="64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8" exitCode=0 Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.648523 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdkrm" event={"ID":"60d3af38-d0dc-4583-a609-d3568ee074be","Type":"ContainerDied","Data":"64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8"} Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.648546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdkrm" event={"ID":"60d3af38-d0dc-4583-a609-d3568ee074be","Type":"ContainerDied","Data":"8e0857b018f212d465f58a98f6307bf6d31bf6fea5d576f1511af7e88e892d12"} Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.648562 4751 scope.go:117] "RemoveContainer" containerID="64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.648673 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdkrm" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.668531 4751 scope.go:117] "RemoveContainer" containerID="39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.688011 4751 scope.go:117] "RemoveContainer" containerID="5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.703526 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-catalog-content\") pod \"60d3af38-d0dc-4583-a609-d3568ee074be\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.703683 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5nts\" (UniqueName: \"kubernetes.io/projected/60d3af38-d0dc-4583-a609-d3568ee074be-kube-api-access-h5nts\") pod \"60d3af38-d0dc-4583-a609-d3568ee074be\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.703807 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-utilities\") pod \"60d3af38-d0dc-4583-a609-d3568ee074be\" (UID: \"60d3af38-d0dc-4583-a609-d3568ee074be\") " Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.704447 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-utilities" (OuterVolumeSpecName: "utilities") pod "60d3af38-d0dc-4583-a609-d3568ee074be" (UID: "60d3af38-d0dc-4583-a609-d3568ee074be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.714440 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d3af38-d0dc-4583-a609-d3568ee074be-kube-api-access-h5nts" (OuterVolumeSpecName: "kube-api-access-h5nts") pod "60d3af38-d0dc-4583-a609-d3568ee074be" (UID: "60d3af38-d0dc-4583-a609-d3568ee074be"). InnerVolumeSpecName "kube-api-access-h5nts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.746151 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60d3af38-d0dc-4583-a609-d3568ee074be" (UID: "60d3af38-d0dc-4583-a609-d3568ee074be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.800934 4751 scope.go:117] "RemoveContainer" containerID="64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8" Nov 23 04:29:21 crc kubenswrapper[4751]: E1123 04:29:21.801530 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8\": container with ID starting with 64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8 not found: ID does not exist" containerID="64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.801577 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8"} err="failed to get container status \"64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8\": rpc error: code = NotFound desc = could not find container \"64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8\": container with ID starting with 64f08a388ac0475bb04a40f459b3a1c427c15194aadf8ff49afc53bfd648bee8 not found: ID does not exist" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.801630 4751 scope.go:117] "RemoveContainer" containerID="39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3" Nov 23 04:29:21 crc kubenswrapper[4751]: E1123 04:29:21.801926 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3\": container with ID starting with 39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3 not found: ID does not exist" containerID="39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.801974 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3"} err="failed to get container status \"39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3\": rpc error: code = NotFound desc = could not find container \"39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3\": container with ID starting with 39b8ca8f94fc8abc88356bcb8d844c91b6f33824a06bb6c55bf1a617ac214ec3 not found: ID does not exist" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.802001 4751 scope.go:117] "RemoveContainer" containerID="5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1" Nov 23 04:29:21 crc kubenswrapper[4751]: E1123 04:29:21.802237 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1\": container with ID starting with 5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1 not found: ID does not exist" containerID="5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.802263 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1"} err="failed to get container status \"5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1\": rpc error: code = NotFound desc = could not 
find container \"5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1\": container with ID starting with 5f6a8a9af3b2ef2090e8062789e4e2990e18399c1db600013ef4cab540411af1 not found: ID does not exist" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.805795 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.805820 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5nts\" (UniqueName: \"kubernetes.io/projected/60d3af38-d0dc-4583-a609-d3568ee074be-kube-api-access-h5nts\") on node \"crc\" DevicePath \"\"" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.805832 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d3af38-d0dc-4583-a609-d3568ee074be-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.982224 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wdkrm"] Nov 23 04:29:21 crc kubenswrapper[4751]: I1123 04:29:21.990101 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wdkrm"] Nov 23 04:29:22 crc kubenswrapper[4751]: I1123 04:29:22.656975 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" path="/var/lib/kubelet/pods/60d3af38-d0dc-4583-a609-d3568ee074be/volumes" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.167237 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4"] Nov 23 04:30:00 crc kubenswrapper[4751]: E1123 04:30:00.168087 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" containerName="registry-server" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.168101 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" containerName="registry-server" Nov 23 04:30:00 crc kubenswrapper[4751]: E1123 04:30:00.168118 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" containerName="extract-content" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.168125 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" containerName="extract-content" Nov 23 04:30:00 crc kubenswrapper[4751]: E1123 04:30:00.168148 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" containerName="extract-utilities" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.168157 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" containerName="extract-utilities" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.168392 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d3af38-d0dc-4583-a609-d3568ee074be" containerName="registry-server" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.169081 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.171240 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.172682 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.197497 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4"] Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.305954 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cabe535-766c-4fe3-ab36-fe2318e84b70-secret-volume\") pod \"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.306004 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh4hp\" (UniqueName: \"kubernetes.io/projected/2cabe535-766c-4fe3-ab36-fe2318e84b70-kube-api-access-fh4hp\") pod \"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.306467 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cabe535-766c-4fe3-ab36-fe2318e84b70-config-volume\") pod \"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.408381 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cabe535-766c-4fe3-ab36-fe2318e84b70-config-volume\") pod \"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.408488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cabe535-766c-4fe3-ab36-fe2318e84b70-secret-volume\") pod \"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.408533 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh4hp\" (UniqueName: \"kubernetes.io/projected/2cabe535-766c-4fe3-ab36-fe2318e84b70-kube-api-access-fh4hp\") pod \"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.409248 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cabe535-766c-4fe3-ab36-fe2318e84b70-config-volume\") pod 
\"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.420471 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cabe535-766c-4fe3-ab36-fe2318e84b70-secret-volume\") pod \"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.427265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh4hp\" (UniqueName: \"kubernetes.io/projected/2cabe535-766c-4fe3-ab36-fe2318e84b70-kube-api-access-fh4hp\") pod \"collect-profiles-29397870-8sqq4\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.523760 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:00 crc kubenswrapper[4751]: I1123 04:30:00.977261 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4"] Nov 23 04:30:01 crc kubenswrapper[4751]: I1123 04:30:01.065801 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" event={"ID":"2cabe535-766c-4fe3-ab36-fe2318e84b70","Type":"ContainerStarted","Data":"dd6e04ddc84ae1e58f96812f34ddf8230f79ba3bd888809a1cd57479c67320c6"} Nov 23 04:30:02 crc kubenswrapper[4751]: I1123 04:30:02.075713 4751 generic.go:334] "Generic (PLEG): container finished" podID="2cabe535-766c-4fe3-ab36-fe2318e84b70" containerID="171e9830357d2bfe8ab9304b3b21fc0eaa6616ebd403b49e12b36301ed3639f2" exitCode=0 Nov 23 04:30:02 crc kubenswrapper[4751]: I1123 04:30:02.075758 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" event={"ID":"2cabe535-766c-4fe3-ab36-fe2318e84b70","Type":"ContainerDied","Data":"171e9830357d2bfe8ab9304b3b21fc0eaa6616ebd403b49e12b36301ed3639f2"} Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.464630 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.567053 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh4hp\" (UniqueName: \"kubernetes.io/projected/2cabe535-766c-4fe3-ab36-fe2318e84b70-kube-api-access-fh4hp\") pod \"2cabe535-766c-4fe3-ab36-fe2318e84b70\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.567261 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cabe535-766c-4fe3-ab36-fe2318e84b70-config-volume\") pod \"2cabe535-766c-4fe3-ab36-fe2318e84b70\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.567449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cabe535-766c-4fe3-ab36-fe2318e84b70-secret-volume\") pod \"2cabe535-766c-4fe3-ab36-fe2318e84b70\" (UID: \"2cabe535-766c-4fe3-ab36-fe2318e84b70\") " Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.568162 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cabe535-766c-4fe3-ab36-fe2318e84b70-config-volume" (OuterVolumeSpecName: "config-volume") pod "2cabe535-766c-4fe3-ab36-fe2318e84b70" (UID: "2cabe535-766c-4fe3-ab36-fe2318e84b70"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.574546 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cabe535-766c-4fe3-ab36-fe2318e84b70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2cabe535-766c-4fe3-ab36-fe2318e84b70" (UID: "2cabe535-766c-4fe3-ab36-fe2318e84b70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.575387 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cabe535-766c-4fe3-ab36-fe2318e84b70-kube-api-access-fh4hp" (OuterVolumeSpecName: "kube-api-access-fh4hp") pod "2cabe535-766c-4fe3-ab36-fe2318e84b70" (UID: "2cabe535-766c-4fe3-ab36-fe2318e84b70"). InnerVolumeSpecName "kube-api-access-fh4hp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.669422 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cabe535-766c-4fe3-ab36-fe2318e84b70-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.669456 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cabe535-766c-4fe3-ab36-fe2318e84b70-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:03 crc kubenswrapper[4751]: I1123 04:30:03.669466 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh4hp\" (UniqueName: \"kubernetes.io/projected/2cabe535-766c-4fe3-ab36-fe2318e84b70-kube-api-access-fh4hp\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:04 crc kubenswrapper[4751]: I1123 04:30:04.099534 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" event={"ID":"2cabe535-766c-4fe3-ab36-fe2318e84b70","Type":"ContainerDied","Data":"dd6e04ddc84ae1e58f96812f34ddf8230f79ba3bd888809a1cd57479c67320c6"} Nov 23 04:30:04 crc kubenswrapper[4751]: I1123 04:30:04.099599 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd6e04ddc84ae1e58f96812f34ddf8230f79ba3bd888809a1cd57479c67320c6" Nov 23 04:30:04 crc kubenswrapper[4751]: I1123 04:30:04.099657 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397870-8sqq4" Nov 23 04:30:04 crc kubenswrapper[4751]: I1123 04:30:04.571142 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz"] Nov 23 04:30:04 crc kubenswrapper[4751]: I1123 04:30:04.581276 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397825-txtpz"] Nov 23 04:30:04 crc kubenswrapper[4751]: I1123 04:30:04.691222 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b284192d-dce2-4f47-b7bf-b44841965150" path="/var/lib/kubelet/pods/b284192d-dce2-4f47-b7bf-b44841965150/volumes" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.117560 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4l7j"] Nov 23 04:30:18 crc kubenswrapper[4751]: E1123 04:30:18.118546 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cabe535-766c-4fe3-ab36-fe2318e84b70" containerName="collect-profiles" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.118561 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cabe535-766c-4fe3-ab36-fe2318e84b70" containerName="collect-profiles" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.118790 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cabe535-766c-4fe3-ab36-fe2318e84b70" containerName="collect-profiles" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.120371 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.134080 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4l7j"] Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.271874 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8sg\" (UniqueName: \"kubernetes.io/projected/1ad9a511-b000-42e3-9592-37ce6d105d4b-kube-api-access-jj8sg\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.272358 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-catalog-content\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.272390 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-utilities\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.374407 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-catalog-content\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.374475 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-utilities\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.374564 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8sg\" (UniqueName: \"kubernetes.io/projected/1ad9a511-b000-42e3-9592-37ce6d105d4b-kube-api-access-jj8sg\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.375091 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-utilities\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.375518 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-catalog-content\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.394416 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jj8sg\" (UniqueName: \"kubernetes.io/projected/1ad9a511-b000-42e3-9592-37ce6d105d4b-kube-api-access-jj8sg\") pod \"redhat-operators-x4l7j\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.453065 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:18 crc kubenswrapper[4751]: I1123 04:30:18.917798 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4l7j"] Nov 23 04:30:19 crc kubenswrapper[4751]: I1123 04:30:19.257171 4751 generic.go:334] "Generic (PLEG): container finished" podID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerID="6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2" exitCode=0 Nov 23 04:30:19 crc kubenswrapper[4751]: I1123 04:30:19.258559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4l7j" event={"ID":"1ad9a511-b000-42e3-9592-37ce6d105d4b","Type":"ContainerDied","Data":"6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2"} Nov 23 04:30:19 crc kubenswrapper[4751]: I1123 04:30:19.259568 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4l7j" event={"ID":"1ad9a511-b000-42e3-9592-37ce6d105d4b","Type":"ContainerStarted","Data":"663de1eafba8dbcb9314b9cd85f4de36ece4c9bb565739c6606d118f5bb75ce4"} Nov 23 04:30:20 crc kubenswrapper[4751]: I1123 04:30:20.269667 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4l7j" event={"ID":"1ad9a511-b000-42e3-9592-37ce6d105d4b","Type":"ContainerStarted","Data":"46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25"} Nov 23 04:30:21 crc kubenswrapper[4751]: I1123 04:30:21.391063 4751 scope.go:117] "RemoveContainer" containerID="1b0edbb0fd435f5a3758c6913286143cbb8c31676a3ed6522ef2fc6a8c7bb955" Nov 23 04:30:22 crc kubenswrapper[4751]: I1123 04:30:22.297114 4751 generic.go:334] "Generic (PLEG): container finished" podID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerID="46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25" exitCode=0 Nov 23 04:30:22 crc kubenswrapper[4751]: I1123 04:30:22.297235 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4l7j" event={"ID":"1ad9a511-b000-42e3-9592-37ce6d105d4b","Type":"ContainerDied","Data":"46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25"} Nov 23 04:30:24 crc kubenswrapper[4751]: I1123 04:30:24.319435 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4l7j" event={"ID":"1ad9a511-b000-42e3-9592-37ce6d105d4b","Type":"ContainerStarted","Data":"5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e"} Nov 23 04:30:24 crc kubenswrapper[4751]: I1123 04:30:24.343373 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4l7j" podStartSLOduration=2.005415879 podStartE2EDuration="6.343354165s" podCreationTimestamp="2025-11-23 04:30:18 +0000 UTC" firstStartedPulling="2025-11-23 04:30:19.260241604 +0000 UTC m=+2115.453912963" lastFinishedPulling="2025-11-23 04:30:23.59817986 +0000 UTC m=+2119.791851249" observedRunningTime="2025-11-23 04:30:24.336169316 +0000 UTC m=+2120.529840675" watchObservedRunningTime="2025-11-23 04:30:24.343354165 +0000 UTC m=+2120.537025514" Nov 23 
Nov 23 04:30:28 crc kubenswrapper[4751]: I1123 04:30:28.453697 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4l7j"
Nov 23 04:30:28 crc kubenswrapper[4751]: I1123 04:30:28.455645 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4l7j"
Nov 23 04:30:29 crc kubenswrapper[4751]: I1123 04:30:29.545936 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4l7j" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="registry-server" probeResult="failure" output=<
Nov 23 04:30:29 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Nov 23 04:30:29 crc kubenswrapper[4751]: >
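[Editor note] The probe output above, "timeout: failed to connect service \":50051\" within 1s", is the registry-server's startup probe timing out while the catalog gRPC endpoint on port 50051 is still loading its content; the same probe recovers at 04:30:38 below. A stand-alone approximation of that kind of check in Go, assuming only a TCP connect with a one-second budget (the real probe runs inside the pod and may use a gRPC health check rather than a raw connect):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    // Try the registry-server port with a 1s budget and report failure
    // the way the startup probe output above does. The address is an
    // assumption; inside the pod the probe targets :50051 directly.
    func main() {
    	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", time.Second)
    	if err != nil {
    		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("service reachable")
    }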
\"kubernetes.io/projected/241f67d1-21ef-41f4-8822-c425dc6c9aee-kube-api-access-x2stc\") pod \"redhat-marketplace-k5btr\" (UID: \"241f67d1-21ef-41f4-8822-c425dc6c9aee\") " pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:30 crc kubenswrapper[4751]: I1123 04:30:30.251381 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/241f67d1-21ef-41f4-8822-c425dc6c9aee-utilities\") pod \"redhat-marketplace-k5btr\" (UID: \"241f67d1-21ef-41f4-8822-c425dc6c9aee\") " pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:30 crc kubenswrapper[4751]: I1123 04:30:30.251553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/241f67d1-21ef-41f4-8822-c425dc6c9aee-catalog-content\") pod \"redhat-marketplace-k5btr\" (UID: \"241f67d1-21ef-41f4-8822-c425dc6c9aee\") " pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:30 crc kubenswrapper[4751]: I1123 04:30:30.269923 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2stc\" (UniqueName: \"kubernetes.io/projected/241f67d1-21ef-41f4-8822-c425dc6c9aee-kube-api-access-x2stc\") pod \"redhat-marketplace-k5btr\" (UID: \"241f67d1-21ef-41f4-8822-c425dc6c9aee\") " pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:30 crc kubenswrapper[4751]: I1123 04:30:30.337190 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:30 crc kubenswrapper[4751]: I1123 04:30:30.785493 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5btr"] Nov 23 04:30:31 crc kubenswrapper[4751]: I1123 04:30:31.405075 4751 generic.go:334] "Generic (PLEG): container finished" podID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerID="25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a" exitCode=0 Nov 23 04:30:31 crc kubenswrapper[4751]: I1123 04:30:31.405136 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5btr" event={"ID":"241f67d1-21ef-41f4-8822-c425dc6c9aee","Type":"ContainerDied","Data":"25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a"} Nov 23 04:30:31 crc kubenswrapper[4751]: I1123 04:30:31.405617 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5btr" event={"ID":"241f67d1-21ef-41f4-8822-c425dc6c9aee","Type":"ContainerStarted","Data":"b18813426e8ee07c8c5d0cb981f6f17cb1f0f5de8fbd71ba7f27e27898e44af5"} Nov 23 04:30:32 crc kubenswrapper[4751]: I1123 04:30:32.419618 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5btr" event={"ID":"241f67d1-21ef-41f4-8822-c425dc6c9aee","Type":"ContainerStarted","Data":"08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334"} Nov 23 04:30:33 crc kubenswrapper[4751]: I1123 04:30:33.437082 4751 generic.go:334] "Generic (PLEG): container finished" podID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerID="08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334" exitCode=0 Nov 23 04:30:33 crc kubenswrapper[4751]: I1123 04:30:33.437220 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5btr" event={"ID":"241f67d1-21ef-41f4-8822-c425dc6c9aee","Type":"ContainerDied","Data":"08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334"} Nov 23 
04:30:34 crc kubenswrapper[4751]: I1123 04:30:34.449973 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5btr" event={"ID":"241f67d1-21ef-41f4-8822-c425dc6c9aee","Type":"ContainerStarted","Data":"464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a"} Nov 23 04:30:34 crc kubenswrapper[4751]: I1123 04:30:34.482867 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k5btr" podStartSLOduration=3.042294625 podStartE2EDuration="5.482837683s" podCreationTimestamp="2025-11-23 04:30:29 +0000 UTC" firstStartedPulling="2025-11-23 04:30:31.407416116 +0000 UTC m=+2127.601087475" lastFinishedPulling="2025-11-23 04:30:33.847959174 +0000 UTC m=+2130.041630533" observedRunningTime="2025-11-23 04:30:34.472146602 +0000 UTC m=+2130.665817961" watchObservedRunningTime="2025-11-23 04:30:34.482837683 +0000 UTC m=+2130.676509092" Nov 23 04:30:38 crc kubenswrapper[4751]: I1123 04:30:38.115415 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:30:38 crc kubenswrapper[4751]: I1123 04:30:38.116335 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:30:38 crc kubenswrapper[4751]: I1123 04:30:38.510822 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:38 crc kubenswrapper[4751]: I1123 04:30:38.587275 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:38 crc kubenswrapper[4751]: I1123 04:30:38.765022 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4l7j"] Nov 23 04:30:40 crc kubenswrapper[4751]: I1123 04:30:40.337689 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:40 crc kubenswrapper[4751]: I1123 04:30:40.338090 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:40 crc kubenswrapper[4751]: I1123 04:30:40.410693 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:40 crc kubenswrapper[4751]: I1123 04:30:40.511252 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4l7j" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="registry-server" containerID="cri-o://5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e" gracePeriod=2 Nov 23 04:30:40 crc kubenswrapper[4751]: I1123 04:30:40.590876 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.037267 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.158732 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5btr"] Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.171236 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-utilities\") pod \"1ad9a511-b000-42e3-9592-37ce6d105d4b\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.171422 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj8sg\" (UniqueName: \"kubernetes.io/projected/1ad9a511-b000-42e3-9592-37ce6d105d4b-kube-api-access-jj8sg\") pod \"1ad9a511-b000-42e3-9592-37ce6d105d4b\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.171526 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-catalog-content\") pod \"1ad9a511-b000-42e3-9592-37ce6d105d4b\" (UID: \"1ad9a511-b000-42e3-9592-37ce6d105d4b\") " Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.178683 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-utilities" (OuterVolumeSpecName: "utilities") pod "1ad9a511-b000-42e3-9592-37ce6d105d4b" (UID: "1ad9a511-b000-42e3-9592-37ce6d105d4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.183758 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad9a511-b000-42e3-9592-37ce6d105d4b-kube-api-access-jj8sg" (OuterVolumeSpecName: "kube-api-access-jj8sg") pod "1ad9a511-b000-42e3-9592-37ce6d105d4b" (UID: "1ad9a511-b000-42e3-9592-37ce6d105d4b"). InnerVolumeSpecName "kube-api-access-jj8sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.246544 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ad9a511-b000-42e3-9592-37ce6d105d4b" (UID: "1ad9a511-b000-42e3-9592-37ce6d105d4b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.274000 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.274036 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj8sg\" (UniqueName: \"kubernetes.io/projected/1ad9a511-b000-42e3-9592-37ce6d105d4b-kube-api-access-jj8sg\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.274049 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ad9a511-b000-42e3-9592-37ce6d105d4b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.525422 4751 generic.go:334] "Generic (PLEG): container finished" podID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerID="5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e" exitCode=0 Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.525502 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4l7j" event={"ID":"1ad9a511-b000-42e3-9592-37ce6d105d4b","Type":"ContainerDied","Data":"5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e"} Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.525529 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4l7j" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.525554 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4l7j" event={"ID":"1ad9a511-b000-42e3-9592-37ce6d105d4b","Type":"ContainerDied","Data":"663de1eafba8dbcb9314b9cd85f4de36ece4c9bb565739c6606d118f5bb75ce4"} Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.525577 4751 scope.go:117] "RemoveContainer" containerID="5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.555763 4751 scope.go:117] "RemoveContainer" containerID="46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.578610 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4l7j"] Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.596000 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4l7j"] Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.606781 4751 scope.go:117] "RemoveContainer" containerID="6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.649471 4751 scope.go:117] "RemoveContainer" containerID="5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e" Nov 23 04:30:41 crc kubenswrapper[4751]: E1123 04:30:41.649888 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e\": container with ID starting with 5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e not found: ID does not exist" containerID="5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.649932 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e"} err="failed to get container status \"5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e\": rpc error: code = NotFound desc = could not find container \"5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e\": container with ID starting with 5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e not found: ID does not exist" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.649959 4751 scope.go:117] "RemoveContainer" containerID="46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25" Nov 23 04:30:41 crc kubenswrapper[4751]: E1123 04:30:41.650520 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25\": container with ID starting with 46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25 not found: ID does not exist" containerID="46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.650555 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25"} err="failed to get container status \"46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25\": rpc error: code = NotFound desc = could not find container \"46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25\": container with ID starting with 46aa68d7388eed3f61c086479553f57d39e4f5c1790c0907bf27f69e5ba2be25 not found: ID does not exist" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.650581 4751 scope.go:117] "RemoveContainer" containerID="6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2" Nov 23 04:30:41 crc kubenswrapper[4751]: E1123 04:30:41.650922 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2\": container with ID starting with 6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2 not found: ID does not exist" containerID="6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2" Nov 23 04:30:41 crc kubenswrapper[4751]: I1123 04:30:41.650954 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2"} err="failed to get container status \"6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2\": rpc error: code = NotFound desc = could not find container \"6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2\": container with ID starting with 6330c368fee77b1e681ea8bdbe7878e0c7604712202bcfffa48b7cc5bc2717d2 not found: ID does not exist" Nov 23 04:30:42 crc kubenswrapper[4751]: I1123 04:30:42.538875 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k5btr" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerName="registry-server" containerID="cri-o://464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a" gracePeriod=2 Nov 23 04:30:42 crc kubenswrapper[4751]: I1123 04:30:42.664631 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" 
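[Editor note] The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are the kubelet retrying deletion of containers CRI-O has already garbage-collected; the NotFound errors are benign because container removal is idempotent. A hedged Go sketch of that tolerate-NotFound pattern (errNotFound stands in for the gRPC NotFound code the runtime returns; none of these names are kubelet's):

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("container not found: ID does not exist")

    // removeContainer deletes id from the runtime's view; a repeat call
    // reports NotFound, exactly the situation in the entries above.
    func removeContainer(known map[string]bool, id string) error {
    	if !known[id] {
    		return errNotFound
    	}
    	delete(known, id)
    	return nil
    }

    func main() {
    	id := "5bfea462dc047ac41f4437af642e9b354e71e75d8ac73c5aa215374939121d8e"
    	known := map[string]bool{id: true}
    	for i := 0; i < 2; i++ {
    		if err := removeContainer(known, id); errors.Is(err, errNotFound) {
    			fmt.Println("already gone, treating as removed:", id[:12]) // benign, like the log's NotFound
    		} else {
    			fmt.Println("removed:", id[:12])
    		}
    	}
    }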
path="/var/lib/kubelet/pods/1ad9a511-b000-42e3-9592-37ce6d105d4b/volumes" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.096214 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.229103 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/241f67d1-21ef-41f4-8822-c425dc6c9aee-utilities\") pod \"241f67d1-21ef-41f4-8822-c425dc6c9aee\" (UID: \"241f67d1-21ef-41f4-8822-c425dc6c9aee\") " Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.229395 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/241f67d1-21ef-41f4-8822-c425dc6c9aee-catalog-content\") pod \"241f67d1-21ef-41f4-8822-c425dc6c9aee\" (UID: \"241f67d1-21ef-41f4-8822-c425dc6c9aee\") " Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.229444 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2stc\" (UniqueName: \"kubernetes.io/projected/241f67d1-21ef-41f4-8822-c425dc6c9aee-kube-api-access-x2stc\") pod \"241f67d1-21ef-41f4-8822-c425dc6c9aee\" (UID: \"241f67d1-21ef-41f4-8822-c425dc6c9aee\") " Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.230775 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/241f67d1-21ef-41f4-8822-c425dc6c9aee-utilities" (OuterVolumeSpecName: "utilities") pod "241f67d1-21ef-41f4-8822-c425dc6c9aee" (UID: "241f67d1-21ef-41f4-8822-c425dc6c9aee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.236368 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/241f67d1-21ef-41f4-8822-c425dc6c9aee-kube-api-access-x2stc" (OuterVolumeSpecName: "kube-api-access-x2stc") pod "241f67d1-21ef-41f4-8822-c425dc6c9aee" (UID: "241f67d1-21ef-41f4-8822-c425dc6c9aee"). InnerVolumeSpecName "kube-api-access-x2stc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.279827 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/241f67d1-21ef-41f4-8822-c425dc6c9aee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "241f67d1-21ef-41f4-8822-c425dc6c9aee" (UID: "241f67d1-21ef-41f4-8822-c425dc6c9aee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.331481 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/241f67d1-21ef-41f4-8822-c425dc6c9aee-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.331517 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2stc\" (UniqueName: \"kubernetes.io/projected/241f67d1-21ef-41f4-8822-c425dc6c9aee-kube-api-access-x2stc\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.331529 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/241f67d1-21ef-41f4-8822-c425dc6c9aee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.553642 4751 generic.go:334] "Generic (PLEG): container finished" podID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerID="464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a" exitCode=0 Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.553691 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5btr" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.553721 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5btr" event={"ID":"241f67d1-21ef-41f4-8822-c425dc6c9aee","Type":"ContainerDied","Data":"464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a"} Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.554194 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5btr" event={"ID":"241f67d1-21ef-41f4-8822-c425dc6c9aee","Type":"ContainerDied","Data":"b18813426e8ee07c8c5d0cb981f6f17cb1f0f5de8fbd71ba7f27e27898e44af5"} Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.554274 4751 scope.go:117] "RemoveContainer" containerID="464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.580717 4751 scope.go:117] "RemoveContainer" containerID="08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.596580 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5btr"] Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.604139 4751 scope.go:117] "RemoveContainer" containerID="25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.607410 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5btr"] Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.654956 4751 scope.go:117] "RemoveContainer" containerID="464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a" Nov 23 04:30:43 crc kubenswrapper[4751]: E1123 04:30:43.655529 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a\": container with ID starting with 464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a not found: ID does not exist" containerID="464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.655607 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a"} err="failed to get container status \"464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a\": rpc error: code = NotFound desc = could not find container \"464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a\": container with ID starting with 464d80a8621e49e917019f78e1cc253529d757b7f6fb749a82faa749b8cb0a2a not found: ID does not exist" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.655632 4751 scope.go:117] "RemoveContainer" containerID="08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334" Nov 23 04:30:43 crc kubenswrapper[4751]: E1123 04:30:43.655978 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334\": container with ID starting with 08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334 not found: ID does not exist" containerID="08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.655999 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334"} err="failed to get container status \"08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334\": rpc error: code = NotFound desc = could not find container \"08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334\": container with ID starting with 08e3b0e9fff5f655518ad53a3b4a53d9f08adf86415a695984c0a4061f830334 not found: ID does not exist" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.656015 4751 scope.go:117] "RemoveContainer" containerID="25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a" Nov 23 04:30:43 crc kubenswrapper[4751]: E1123 04:30:43.656299 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a\": container with ID starting with 25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a not found: ID does not exist" containerID="25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a" Nov 23 04:30:43 crc kubenswrapper[4751]: I1123 04:30:43.656319 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a"} err="failed to get container status \"25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a\": rpc error: code = NotFound desc = could not find container \"25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a\": container with ID starting with 25a0aea9fa87a4e6219df5b08bddbeff4866698e970a7cb992a2bb86df51860a not found: ID does not exist" Nov 23 04:30:44 crc kubenswrapper[4751]: I1123 04:30:44.663336 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" path="/var/lib/kubelet/pods/241f67d1-21ef-41f4-8822-c425dc6c9aee/volumes" Nov 23 04:31:08 crc kubenswrapper[4751]: I1123 04:31:08.114760 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:31:08 crc kubenswrapper[4751]: I1123 04:31:08.115240 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.114535 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.115029 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.115063 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.115820 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.115864 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" gracePeriod=600 Nov 23 04:31:38 crc kubenswrapper[4751]: E1123 04:31:38.240510 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.566638 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" exitCode=0 Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.566688 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"} Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.566729 4751 scope.go:117] "RemoveContainer" containerID="c1e7e8b52e36f653eea0fa20c8e4329e8aee9acb57f91ce86f6262e362e65ce7" Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 
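[Editor note] machine-config-daemon has failed its liveness probe repeatedly (04:30:38, 04:31:08, 04:31:38), so the kubelet kills it with the pod's 600s grace period and every sync below is short-circuited with "back-off 5m0s restarting failed container": the restart backoff has reached its ceiling. By upstream kubelet defaults the delay starts at 10s and doubles per restart up to a 5m cap; the exact constants are an assumption here, the log only shows the cap being hit. A Go sketch of that schedule:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Restart backoff behind "back-off 5m0s": double from a 10s base,
    // clamp at 5m. After six restarts every further attempt waits 5m0s.
    func main() {
    	const base, maxDelay = 10 * time.Second, 5 * time.Minute
    	delay := base
    	for restart := 1; restart <= 7; restart++ {
    		fmt.Printf("restart %d: wait %s\n", restart, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay // machine-config-daemon is pinned here
    		}
    	}
    }

Note the repeated RemoveContainer / backoff pairs below at roughly 11-15s intervals are sync-loop retries logging the same capped backoff, not actual restart attempts.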
Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.566638 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" exitCode=0
Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.566688 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"}
Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.566729 4751 scope.go:117] "RemoveContainer" containerID="c1e7e8b52e36f653eea0fa20c8e4329e8aee9acb57f91ce86f6262e362e65ce7"
Nov 23 04:31:38 crc kubenswrapper[4751]: I1123 04:31:38.567313 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"
Nov 23 04:31:38 crc kubenswrapper[4751]: E1123 04:31:38.567754 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:31:52 crc kubenswrapper[4751]: I1123 04:31:52.643747 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"
Nov 23 04:31:52 crc kubenswrapper[4751]: E1123 04:31:52.644642 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:32:05 crc kubenswrapper[4751]: I1123 04:32:05.666925 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"
Nov 23 04:32:05 crc kubenswrapper[4751]: E1123 04:32:05.668985 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:32:19 crc kubenswrapper[4751]: I1123 04:32:19.644606 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"
Nov 23 04:32:19 crc kubenswrapper[4751]: E1123 04:32:19.647161 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:32:30 crc kubenswrapper[4751]: I1123 04:32:30.644826 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"
Nov 23 04:32:30 crc kubenswrapper[4751]: E1123 04:32:30.645740 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:32:44 crc kubenswrapper[4751]: I1123 04:32:44.661739 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"
Nov 23 04:32:44 crc kubenswrapper[4751]: E1123 04:32:44.662895 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:32:59 crc kubenswrapper[4751]: I1123 04:32:59.645226 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"
Nov 23 04:32:59 crc kubenswrapper[4751]: E1123 04:32:59.646194 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:33:10 crc kubenswrapper[4751]: I1123 04:33:10.917303 4751 generic.go:334] "Generic (PLEG): container finished" podID="b1b8004d-68f3-41c1-ac68-2b35a527fd88" containerID="5d70de436aae7b96a8d3b2f1bc0571051f2429e501fe3f0139d9cafc345afb31" exitCode=0
Nov 23 04:33:10 crc kubenswrapper[4751]: I1123 04:33:10.917375 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" event={"ID":"b1b8004d-68f3-41c1-ac68-2b35a527fd88","Type":"ContainerDied","Data":"5d70de436aae7b96a8d3b2f1bc0571051f2429e501fe3f0139d9cafc345afb31"}
Nov 23 04:33:11 crc kubenswrapper[4751]: I1123 04:33:11.645229 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b"
Nov 23 04:33:11 crc kubenswrapper[4751]: E1123 04:33:11.645714 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.459337 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn"
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.488731 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-ssh-key\") pod \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") "
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.488849 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-secret-0\") pod \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") "
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.488974 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-inventory\") pod \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") "
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.489058 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6899\" (UniqueName: \"kubernetes.io/projected/b1b8004d-68f3-41c1-ac68-2b35a527fd88-kube-api-access-m6899\") pod \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") "
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.489093 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-combined-ca-bundle\") pod \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\" (UID: \"b1b8004d-68f3-41c1-ac68-2b35a527fd88\") "
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.497032 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b8004d-68f3-41c1-ac68-2b35a527fd88-kube-api-access-m6899" (OuterVolumeSpecName: "kube-api-access-m6899") pod "b1b8004d-68f3-41c1-ac68-2b35a527fd88" (UID: "b1b8004d-68f3-41c1-ac68-2b35a527fd88"). InnerVolumeSpecName "kube-api-access-m6899". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.504615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b1b8004d-68f3-41c1-ac68-2b35a527fd88" (UID: "b1b8004d-68f3-41c1-ac68-2b35a527fd88"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.535513 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b1b8004d-68f3-41c1-ac68-2b35a527fd88" (UID: "b1b8004d-68f3-41c1-ac68-2b35a527fd88"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.538657 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-inventory" (OuterVolumeSpecName: "inventory") pod "b1b8004d-68f3-41c1-ac68-2b35a527fd88" (UID: "b1b8004d-68f3-41c1-ac68-2b35a527fd88"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.546403 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b1b8004d-68f3-41c1-ac68-2b35a527fd88" (UID: "b1b8004d-68f3-41c1-ac68-2b35a527fd88"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.591307 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.591345 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.591377 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.591389 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6899\" (UniqueName: \"kubernetes.io/projected/b1b8004d-68f3-41c1-ac68-2b35a527fd88-kube-api-access-m6899\") on node \"crc\" DevicePath \"\""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.591400 4751 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b8004d-68f3-41c1-ac68-2b35a527fd88-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.942620 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn" event={"ID":"b1b8004d-68f3-41c1-ac68-2b35a527fd88","Type":"ContainerDied","Data":"87de9b7056cc045399d973afe9e3f092a837bac2d4c121dcc686e95e9d7f6e19"}
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.942675 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87de9b7056cc045399d973afe9e3f092a837bac2d4c121dcc686e95e9d7f6e19"
Nov 23 04:33:12 crc kubenswrapper[4751]: I1123 04:33:12.942687 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-44fjn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.053278 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"]
Nov 23 04:33:13 crc kubenswrapper[4751]: E1123 04:33:13.053933 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b8004d-68f3-41c1-ac68-2b35a527fd88" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.053955 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b8004d-68f3-41c1-ac68-2b35a527fd88" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:33:13 crc kubenswrapper[4751]: E1123 04:33:13.053989 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerName="registry-server"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054001 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerName="registry-server"
Nov 23 04:33:13 crc kubenswrapper[4751]: E1123 04:33:13.054033 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="extract-content"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054046 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="extract-content"
Nov 23 04:33:13 crc kubenswrapper[4751]: E1123 04:33:13.054082 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerName="extract-utilities"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054095 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerName="extract-utilities"
Nov 23 04:33:13 crc kubenswrapper[4751]: E1123 04:33:13.054117 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="extract-utilities"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054130 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="extract-utilities"
Nov 23 04:33:13 crc kubenswrapper[4751]: E1123 04:33:13.054159 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="registry-server"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054171 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="registry-server"
Nov 23 04:33:13 crc kubenswrapper[4751]: E1123 04:33:13.054192 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerName="extract-content"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054204 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerName="extract-content"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054559 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b8004d-68f3-41c1-ac68-2b35a527fd88" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054588 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="241f67d1-21ef-41f4-8822-c425dc6c9aee" containerName="registry-server"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.054611 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad9a511-b000-42e3-9592-37ce6d105d4b" containerName="registry-server"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.055667 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.062566 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.062849 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.062673 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.062740 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.062780 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.062783 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.062797 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.071824 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"]
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.100626 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.100679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.100735 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.100780 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.100844 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67fs\" (UniqueName: \"kubernetes.io/projected/88fdef25-3ea0-48cf-8c54-22776698b6dc-kube-api-access-f67fs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.100874 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.100931 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.101028 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.101108 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67fs\" (UniqueName: \"kubernetes.io/projected/88fdef25-3ea0-48cf-8c54-22776698b6dc-kube-api-access-f67fs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203421 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203546 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203653 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203711 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203757 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.203826 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.206050 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"
Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.207903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") "
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.208765 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.210224 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.210757 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.211492 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.212462 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.213198 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.227996 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67fs\" (UniqueName: \"kubernetes.io/projected/88fdef25-3ea0-48cf-8c54-22776698b6dc-kube-api-access-f67fs\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4nzvn\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.375533 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.955309 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn"] Nov 23 04:33:13 crc kubenswrapper[4751]: W1123 04:33:13.958526 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88fdef25_3ea0_48cf_8c54_22776698b6dc.slice/crio-fe773507197add775f604b8d806b1a16452f6e7f837837e5298d05bb7c17abff WatchSource:0}: Error finding container fe773507197add775f604b8d806b1a16452f6e7f837837e5298d05bb7c17abff: Status 404 returned error can't find the container with id fe773507197add775f604b8d806b1a16452f6e7f837837e5298d05bb7c17abff Nov 23 04:33:13 crc kubenswrapper[4751]: I1123 04:33:13.960602 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:33:14 crc kubenswrapper[4751]: I1123 04:33:14.968705 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" event={"ID":"88fdef25-3ea0-48cf-8c54-22776698b6dc","Type":"ContainerStarted","Data":"0f41e79b24cab7af2a50aef134602145d5b83d3884f2af49e5ed2550a2d4af39"} Nov 23 04:33:14 crc kubenswrapper[4751]: I1123 04:33:14.969174 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" event={"ID":"88fdef25-3ea0-48cf-8c54-22776698b6dc","Type":"ContainerStarted","Data":"fe773507197add775f604b8d806b1a16452f6e7f837837e5298d05bb7c17abff"} Nov 23 04:33:15 crc kubenswrapper[4751]: I1123 04:33:15.009746 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" podStartSLOduration=1.5737177770000002 podStartE2EDuration="2.009717989s" podCreationTimestamp="2025-11-23 04:33:13 +0000 UTC" firstStartedPulling="2025-11-23 04:33:13.960268923 +0000 UTC m=+2290.153940292" lastFinishedPulling="2025-11-23 04:33:14.396269145 +0000 UTC m=+2290.589940504" observedRunningTime="2025-11-23 04:33:14.993063708 +0000 UTC m=+2291.186735107" watchObservedRunningTime="2025-11-23 04:33:15.009717989 +0000 UTC m=+2291.203389378" Nov 23 04:33:24 crc kubenswrapper[4751]: I1123 04:33:24.658045 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:33:24 crc kubenswrapper[4751]: E1123 04:33:24.659004 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:33:38 crc kubenswrapper[4751]: I1123 04:33:38.644731 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:33:38 crc kubenswrapper[4751]: E1123 04:33:38.646005 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:33:50 crc kubenswrapper[4751]: I1123 04:33:50.644400 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:33:50 crc kubenswrapper[4751]: E1123 04:33:50.645217 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:34:05 crc kubenswrapper[4751]: I1123 04:34:05.644222 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:34:05 crc kubenswrapper[4751]: E1123 04:34:05.645031 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:34:20 crc kubenswrapper[4751]: I1123 04:34:20.643921 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:34:20 crc kubenswrapper[4751]: E1123 04:34:20.644809 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:34:32 crc kubenswrapper[4751]: I1123 04:34:32.644417 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:34:32 crc kubenswrapper[4751]: E1123 04:34:32.645394 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:34:44 crc kubenswrapper[4751]: I1123 04:34:44.657801 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:34:44 crc kubenswrapper[4751]: E1123 04:34:44.659104 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:34:57 crc kubenswrapper[4751]: I1123 04:34:57.645907 4751 
scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:34:57 crc kubenswrapper[4751]: E1123 04:34:57.647339 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:35:09 crc kubenswrapper[4751]: I1123 04:35:09.644857 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:35:09 crc kubenswrapper[4751]: E1123 04:35:09.646025 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:35:20 crc kubenswrapper[4751]: I1123 04:35:20.644077 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:35:20 crc kubenswrapper[4751]: E1123 04:35:20.645004 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:35:32 crc kubenswrapper[4751]: I1123 04:35:32.644981 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:35:32 crc kubenswrapper[4751]: E1123 04:35:32.646197 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:35:45 crc kubenswrapper[4751]: I1123 04:35:45.644442 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:35:45 crc kubenswrapper[4751]: E1123 04:35:45.645817 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:35:56 crc kubenswrapper[4751]: I1123 04:35:56.645311 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:35:56 crc kubenswrapper[4751]: E1123 04:35:56.646669 4751 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:36:08 crc kubenswrapper[4751]: I1123 04:36:08.645162 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:36:08 crc kubenswrapper[4751]: E1123 04:36:08.646527 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:36:22 crc kubenswrapper[4751]: I1123 04:36:22.008697 4751 generic.go:334] "Generic (PLEG): container finished" podID="88fdef25-3ea0-48cf-8c54-22776698b6dc" containerID="0f41e79b24cab7af2a50aef134602145d5b83d3884f2af49e5ed2550a2d4af39" exitCode=0 Nov 23 04:36:22 crc kubenswrapper[4751]: I1123 04:36:22.008787 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" event={"ID":"88fdef25-3ea0-48cf-8c54-22776698b6dc","Type":"ContainerDied","Data":"0f41e79b24cab7af2a50aef134602145d5b83d3884f2af49e5ed2550a2d4af39"} Nov 23 04:36:22 crc kubenswrapper[4751]: I1123 04:36:22.644140 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:36:22 crc kubenswrapper[4751]: E1123 04:36:22.644456 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.515286 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.654896 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-0\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.654944 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-1\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.655047 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-inventory\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.655099 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-extra-config-0\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.656139 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-0\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.656493 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-ssh-key\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.656525 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67fs\" (UniqueName: \"kubernetes.io/projected/88fdef25-3ea0-48cf-8c54-22776698b6dc-kube-api-access-f67fs\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.656548 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-1\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.656631 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-combined-ca-bundle\") pod \"88fdef25-3ea0-48cf-8c54-22776698b6dc\" (UID: \"88fdef25-3ea0-48cf-8c54-22776698b6dc\") " Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.662748 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/88fdef25-3ea0-48cf-8c54-22776698b6dc-kube-api-access-f67fs" (OuterVolumeSpecName: "kube-api-access-f67fs") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "kube-api-access-f67fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.663630 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.688246 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.706587 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-inventory" (OuterVolumeSpecName: "inventory") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.709640 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.710679 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.712988 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.725171 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.734256 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "88fdef25-3ea0-48cf-8c54-22776698b6dc" (UID: "88fdef25-3ea0-48cf-8c54-22776698b6dc"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759220 4751 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759254 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759266 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67fs\" (UniqueName: \"kubernetes.io/projected/88fdef25-3ea0-48cf-8c54-22776698b6dc-kube-api-access-f67fs\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759274 4751 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759283 4751 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759293 4751 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759301 4751 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759310 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88fdef25-3ea0-48cf-8c54-22776698b6dc-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:23 crc kubenswrapper[4751]: I1123 04:36:23.759319 4751 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/88fdef25-3ea0-48cf-8c54-22776698b6dc-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.033974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" event={"ID":"88fdef25-3ea0-48cf-8c54-22776698b6dc","Type":"ContainerDied","Data":"fe773507197add775f604b8d806b1a16452f6e7f837837e5298d05bb7c17abff"} Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.034016 4751 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fe773507197add775f604b8d806b1a16452f6e7f837837e5298d05bb7c17abff" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.034182 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4nzvn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.205522 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn"] Nov 23 04:36:24 crc kubenswrapper[4751]: E1123 04:36:24.206025 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fdef25-3ea0-48cf-8c54-22776698b6dc" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.206050 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fdef25-3ea0-48cf-8c54-22776698b6dc" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.206279 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fdef25-3ea0-48cf-8c54-22776698b6dc" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.207120 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.209467 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.209590 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.209866 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vcqd2" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.209924 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.216203 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn"] Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.217492 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.372275 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.372394 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.372445 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.372663 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.372744 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.372868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.372986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7zt4\" (UniqueName: \"kubernetes.io/projected/8d72beb8-693c-4168-99d2-219a12911413-kube-api-access-c7zt4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.474872 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.474978 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.475037 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.475158 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.475229 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.475342 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.475531 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7zt4\" (UniqueName: \"kubernetes.io/projected/8d72beb8-693c-4168-99d2-219a12911413-kube-api-access-c7zt4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.480755 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.480822 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.481410 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.482038 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc 
kubenswrapper[4751]: I1123 04:36:24.483029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.483968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.507004 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7zt4\" (UniqueName: \"kubernetes.io/projected/8d72beb8-693c-4168-99d2-219a12911413-kube-api-access-c7zt4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.530762 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:36:24 crc kubenswrapper[4751]: I1123 04:36:24.910707 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn"] Nov 23 04:36:25 crc kubenswrapper[4751]: I1123 04:36:25.044116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" event={"ID":"8d72beb8-693c-4168-99d2-219a12911413","Type":"ContainerStarted","Data":"b97bc3d909ff942a03df6d2d69a43e8c7eb9f480f3061bc9da41946b1b3eaf3d"} Nov 23 04:36:26 crc kubenswrapper[4751]: I1123 04:36:26.069048 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" event={"ID":"8d72beb8-693c-4168-99d2-219a12911413","Type":"ContainerStarted","Data":"9ba9f107b151d147cbe076923548bf5150d16c468be23c1ee3184790de3cfcb1"} Nov 23 04:36:26 crc kubenswrapper[4751]: I1123 04:36:26.105586 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" podStartSLOduration=1.5222023340000002 podStartE2EDuration="2.105518651s" podCreationTimestamp="2025-11-23 04:36:24 +0000 UTC" firstStartedPulling="2025-11-23 04:36:24.913062477 +0000 UTC m=+2481.106733836" lastFinishedPulling="2025-11-23 04:36:25.496378764 +0000 UTC m=+2481.690050153" observedRunningTime="2025-11-23 04:36:26.089060909 +0000 UTC m=+2482.282732308" watchObservedRunningTime="2025-11-23 04:36:26.105518651 +0000 UTC m=+2482.299190050" Nov 23 04:36:33 crc kubenswrapper[4751]: I1123 04:36:33.644419 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:36:33 crc kubenswrapper[4751]: E1123 04:36:33.645216 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:36:45 crc kubenswrapper[4751]: I1123 04:36:45.646919 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:36:46 crc kubenswrapper[4751]: I1123 04:36:46.299496 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"becfcf8f99d13492f3144c84653840ad34458d91fb46c4f04a07b9f380cb6bed"} Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.430674 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-76ttx"] Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.433903 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.450920 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-76ttx"] Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.512439 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh5qm\" (UniqueName: \"kubernetes.io/projected/d0a06be4-e2c0-4f69-a4ec-556db3063596-kube-api-access-hh5qm\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.512525 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-catalog-content\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.512764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-utilities\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.614468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh5qm\" (UniqueName: \"kubernetes.io/projected/d0a06be4-e2c0-4f69-a4ec-556db3063596-kube-api-access-hh5qm\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.614850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-catalog-content\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.615073 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-utilities\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.615401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-catalog-content\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.615705 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-utilities\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.639669 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh5qm\" (UniqueName: \"kubernetes.io/projected/d0a06be4-e2c0-4f69-a4ec-556db3063596-kube-api-access-hh5qm\") pod \"community-operators-76ttx\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:02 crc kubenswrapper[4751]: I1123 04:37:02.770507 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:03 crc kubenswrapper[4751]: I1123 04:37:03.271767 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-76ttx"] Nov 23 04:37:03 crc kubenswrapper[4751]: W1123 04:37:03.281071 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0a06be4_e2c0_4f69_a4ec_556db3063596.slice/crio-9a8e7ec89167eee292aa733480d54b2e77a87a049ceb12962ae5a6daeaa1d9a7 WatchSource:0}: Error finding container 9a8e7ec89167eee292aa733480d54b2e77a87a049ceb12962ae5a6daeaa1d9a7: Status 404 returned error can't find the container with id 9a8e7ec89167eee292aa733480d54b2e77a87a049ceb12962ae5a6daeaa1d9a7 Nov 23 04:37:03 crc kubenswrapper[4751]: I1123 04:37:03.487413 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76ttx" event={"ID":"d0a06be4-e2c0-4f69-a4ec-556db3063596","Type":"ContainerStarted","Data":"9a8e7ec89167eee292aa733480d54b2e77a87a049ceb12962ae5a6daeaa1d9a7"} Nov 23 04:37:04 crc kubenswrapper[4751]: I1123 04:37:04.503550 4751 generic.go:334] "Generic (PLEG): container finished" podID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerID="90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d" exitCode=0 Nov 23 04:37:04 crc kubenswrapper[4751]: I1123 04:37:04.503756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76ttx" event={"ID":"d0a06be4-e2c0-4f69-a4ec-556db3063596","Type":"ContainerDied","Data":"90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d"} Nov 23 04:37:05 crc kubenswrapper[4751]: I1123 04:37:05.518058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76ttx" event={"ID":"d0a06be4-e2c0-4f69-a4ec-556db3063596","Type":"ContainerStarted","Data":"93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64"} Nov 23 04:37:06 crc kubenswrapper[4751]: 
I1123 04:37:06.534487 4751 generic.go:334] "Generic (PLEG): container finished" podID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerID="93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64" exitCode=0 Nov 23 04:37:06 crc kubenswrapper[4751]: I1123 04:37:06.534819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76ttx" event={"ID":"d0a06be4-e2c0-4f69-a4ec-556db3063596","Type":"ContainerDied","Data":"93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64"} Nov 23 04:37:07 crc kubenswrapper[4751]: I1123 04:37:07.547640 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76ttx" event={"ID":"d0a06be4-e2c0-4f69-a4ec-556db3063596","Type":"ContainerStarted","Data":"c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c"} Nov 23 04:37:07 crc kubenswrapper[4751]: I1123 04:37:07.568952 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-76ttx" podStartSLOduration=3.112822418 podStartE2EDuration="5.568933514s" podCreationTimestamp="2025-11-23 04:37:02 +0000 UTC" firstStartedPulling="2025-11-23 04:37:04.5105993 +0000 UTC m=+2520.704270699" lastFinishedPulling="2025-11-23 04:37:06.966710426 +0000 UTC m=+2523.160381795" observedRunningTime="2025-11-23 04:37:07.566893974 +0000 UTC m=+2523.760565333" watchObservedRunningTime="2025-11-23 04:37:07.568933514 +0000 UTC m=+2523.762604873" Nov 23 04:37:12 crc kubenswrapper[4751]: I1123 04:37:12.770779 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:12 crc kubenswrapper[4751]: I1123 04:37:12.771510 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:12 crc kubenswrapper[4751]: I1123 04:37:12.839911 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:13 crc kubenswrapper[4751]: I1123 04:37:13.708898 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:13 crc kubenswrapper[4751]: I1123 04:37:13.771502 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-76ttx"] Nov 23 04:37:15 crc kubenswrapper[4751]: I1123 04:37:15.640070 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-76ttx" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerName="registry-server" containerID="cri-o://c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c" gracePeriod=2 Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.353073 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.409639 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh5qm\" (UniqueName: \"kubernetes.io/projected/d0a06be4-e2c0-4f69-a4ec-556db3063596-kube-api-access-hh5qm\") pod \"d0a06be4-e2c0-4f69-a4ec-556db3063596\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.409746 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-catalog-content\") pod \"d0a06be4-e2c0-4f69-a4ec-556db3063596\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.409786 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-utilities\") pod \"d0a06be4-e2c0-4f69-a4ec-556db3063596\" (UID: \"d0a06be4-e2c0-4f69-a4ec-556db3063596\") " Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.411260 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-utilities" (OuterVolumeSpecName: "utilities") pod "d0a06be4-e2c0-4f69-a4ec-556db3063596" (UID: "d0a06be4-e2c0-4f69-a4ec-556db3063596"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.417659 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a06be4-e2c0-4f69-a4ec-556db3063596-kube-api-access-hh5qm" (OuterVolumeSpecName: "kube-api-access-hh5qm") pod "d0a06be4-e2c0-4f69-a4ec-556db3063596" (UID: "d0a06be4-e2c0-4f69-a4ec-556db3063596"). InnerVolumeSpecName "kube-api-access-hh5qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.512563 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh5qm\" (UniqueName: \"kubernetes.io/projected/d0a06be4-e2c0-4f69-a4ec-556db3063596-kube-api-access-hh5qm\") on node \"crc\" DevicePath \"\"" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.512602 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.603393 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0a06be4-e2c0-4f69-a4ec-556db3063596" (UID: "d0a06be4-e2c0-4f69-a4ec-556db3063596"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.614451 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a06be4-e2c0-4f69-a4ec-556db3063596-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.666155 4751 generic.go:334] "Generic (PLEG): container finished" podID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerID="c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c" exitCode=0 Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.666258 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76ttx" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.673408 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76ttx" event={"ID":"d0a06be4-e2c0-4f69-a4ec-556db3063596","Type":"ContainerDied","Data":"c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c"} Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.673464 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76ttx" event={"ID":"d0a06be4-e2c0-4f69-a4ec-556db3063596","Type":"ContainerDied","Data":"9a8e7ec89167eee292aa733480d54b2e77a87a049ceb12962ae5a6daeaa1d9a7"} Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.673488 4751 scope.go:117] "RemoveContainer" containerID="c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.703710 4751 scope.go:117] "RemoveContainer" containerID="93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.707073 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-76ttx"] Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.714984 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-76ttx"] Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.731372 4751 scope.go:117] "RemoveContainer" containerID="90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.781646 4751 scope.go:117] "RemoveContainer" containerID="c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c" Nov 23 04:37:16 crc kubenswrapper[4751]: E1123 04:37:16.782144 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c\": container with ID starting with c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c not found: ID does not exist" containerID="c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.782217 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c"} err="failed to get container status \"c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c\": rpc error: code = NotFound desc = could not find container \"c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c\": container with ID starting with c8b95747c6cce2de8cffdb2915ee3898174ff55dccc494191ecfcdb606b8f07c not found: ID does not exist" Nov 23 
04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.782258 4751 scope.go:117] "RemoveContainer" containerID="93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64" Nov 23 04:37:16 crc kubenswrapper[4751]: E1123 04:37:16.788704 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64\": container with ID starting with 93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64 not found: ID does not exist" containerID="93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.788742 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64"} err="failed to get container status \"93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64\": rpc error: code = NotFound desc = could not find container \"93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64\": container with ID starting with 93d1ae16c9983897e1113a9e07c4388ef88e4200ddcf61e5607dd6fe51fb6c64 not found: ID does not exist" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.788766 4751 scope.go:117] "RemoveContainer" containerID="90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d" Nov 23 04:37:16 crc kubenswrapper[4751]: E1123 04:37:16.789101 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d\": container with ID starting with 90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d not found: ID does not exist" containerID="90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d" Nov 23 04:37:16 crc kubenswrapper[4751]: I1123 04:37:16.789122 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d"} err="failed to get container status \"90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d\": rpc error: code = NotFound desc = could not find container \"90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d\": container with ID starting with 90524ed393546a2cb4b70039a05a3fce2d0e6b094eb88c0fa40516f4ed10ed0d not found: ID does not exist" Nov 23 04:37:18 crc kubenswrapper[4751]: I1123 04:37:18.678561 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" path="/var/lib/kubelet/pods/d0a06be4-e2c0-4f69-a4ec-556db3063596/volumes" Nov 23 04:39:02 crc kubenswrapper[4751]: I1123 04:39:02.921721 4751 generic.go:334] "Generic (PLEG): container finished" podID="8d72beb8-693c-4168-99d2-219a12911413" containerID="9ba9f107b151d147cbe076923548bf5150d16c468be23c1ee3184790de3cfcb1" exitCode=0 Nov 23 04:39:02 crc kubenswrapper[4751]: I1123 04:39:02.921851 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" event={"ID":"8d72beb8-693c-4168-99d2-219a12911413","Type":"ContainerDied","Data":"9ba9f107b151d147cbe076923548bf5150d16c468be23c1ee3184790de3cfcb1"} Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.386169 4751 util.go:48] "No ready sandbox for pod can be found. 
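The "ContainerStatus from runtime service failed ... NotFound ... ID does not exist" errors above are a benign race: the kubelet asks CRI-O to remove containers it has already garbage-collected, and the follow-up status query comes back gRPC NotFound. A minimal sketch of how a CRI client can classify that outcome as already-done rather than a failure; removeContainer is a hypothetical stand-in for the CRI round-trip, using only the standard gRPC status/codes packages:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call; here it answers
// the way CRI-O answered above for an already-deleted container ID.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q: ID does not exist", id)
}

func main() {
	err := removeContainer("c8b95747c6cc")
	if status.Code(err) == codes.NotFound {
		// the race in the log: deletion already happened, nothing left to do
		fmt.Println("container already removed, ignoring:", err)
		return
	}
	if err != nil {
		fmt.Println("real removal failure:", err)
	}
}
```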
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.533629 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ssh-key\") pod \"8d72beb8-693c-4168-99d2-219a12911413\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.533716 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-1\") pod \"8d72beb8-693c-4168-99d2-219a12911413\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.533789 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7zt4\" (UniqueName: \"kubernetes.io/projected/8d72beb8-693c-4168-99d2-219a12911413-kube-api-access-c7zt4\") pod \"8d72beb8-693c-4168-99d2-219a12911413\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.533866 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-inventory\") pod \"8d72beb8-693c-4168-99d2-219a12911413\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.533923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-telemetry-combined-ca-bundle\") pod \"8d72beb8-693c-4168-99d2-219a12911413\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.534023 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-0\") pod \"8d72beb8-693c-4168-99d2-219a12911413\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.534125 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-2\") pod \"8d72beb8-693c-4168-99d2-219a12911413\" (UID: \"8d72beb8-693c-4168-99d2-219a12911413\") " Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.539300 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d72beb8-693c-4168-99d2-219a12911413-kube-api-access-c7zt4" (OuterVolumeSpecName: "kube-api-access-c7zt4") pod "8d72beb8-693c-4168-99d2-219a12911413" (UID: "8d72beb8-693c-4168-99d2-219a12911413"). InnerVolumeSpecName "kube-api-access-c7zt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.539855 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8d72beb8-693c-4168-99d2-219a12911413" (UID: "8d72beb8-693c-4168-99d2-219a12911413"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.566006 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-inventory" (OuterVolumeSpecName: "inventory") pod "8d72beb8-693c-4168-99d2-219a12911413" (UID: "8d72beb8-693c-4168-99d2-219a12911413"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.578971 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8d72beb8-693c-4168-99d2-219a12911413" (UID: "8d72beb8-693c-4168-99d2-219a12911413"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.584390 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8d72beb8-693c-4168-99d2-219a12911413" (UID: "8d72beb8-693c-4168-99d2-219a12911413"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.587137 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8d72beb8-693c-4168-99d2-219a12911413" (UID: "8d72beb8-693c-4168-99d2-219a12911413"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.596572 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8d72beb8-693c-4168-99d2-219a12911413" (UID: "8d72beb8-693c-4168-99d2-219a12911413"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.636767 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7zt4\" (UniqueName: \"kubernetes.io/projected/8d72beb8-693c-4168-99d2-219a12911413-kube-api-access-c7zt4\") on node \"crc\" DevicePath \"\"" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.636804 4751 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.636817 4751 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.636830 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.636843 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.636855 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.636867 4751 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8d72beb8-693c-4168-99d2-219a12911413-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.947163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" event={"ID":"8d72beb8-693c-4168-99d2-219a12911413","Type":"ContainerDied","Data":"b97bc3d909ff942a03df6d2d69a43e8c7eb9f480f3061bc9da41946b1b3eaf3d"} Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.947479 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b97bc3d909ff942a03df6d2d69a43e8c7eb9f480f3061bc9da41946b1b3eaf3d" Nov 23 04:39:04 crc kubenswrapper[4751]: I1123 04:39:04.947291 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn" Nov 23 04:39:08 crc kubenswrapper[4751]: I1123 04:39:08.115083 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:39:08 crc kubenswrapper[4751]: I1123 04:39:08.116499 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:39:38 crc kubenswrapper[4751]: I1123 04:39:38.114773 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:39:38 crc kubenswrapper[4751]: I1123 04:39:38.115561 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.189948 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 04:39:49 crc kubenswrapper[4751]: E1123 04:39:49.191009 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerName="registry-server" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.191033 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerName="registry-server" Nov 23 04:39:49 crc kubenswrapper[4751]: E1123 04:39:49.191078 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerName="extract-content" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.191092 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerName="extract-content" Nov 23 04:39:49 crc kubenswrapper[4751]: E1123 04:39:49.191151 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d72beb8-693c-4168-99d2-219a12911413" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.191169 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d72beb8-693c-4168-99d2-219a12911413" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 04:39:49 crc kubenswrapper[4751]: E1123 04:39:49.191389 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerName="extract-utilities" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.191403 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerName="extract-utilities" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.191659 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8d72beb8-693c-4168-99d2-219a12911413" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.191677 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a06be4-e2c0-4f69-a4ec-556db3063596" containerName="registry-server" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.192380 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.193948 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-w2rjq" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.195387 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.196673 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.196677 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.198605 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266065 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfv8f\" (UniqueName: \"kubernetes.io/projected/941e3bda-6f4a-481b-bb73-1c531d70607e-kube-api-access-mfv8f\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266166 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-config-data\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266237 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266274 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266324 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266374 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266425 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.266485 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.368292 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.368701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.368851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.368981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfv8f\" (UniqueName: \"kubernetes.io/projected/941e3bda-6f4a-481b-bb73-1c531d70607e-kube-api-access-mfv8f\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.369098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-config-data\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.369200 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.369402 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.369593 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.369804 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.370004 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.369674 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.370332 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.370822 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-config-data\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.370836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.376001 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.376055 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.388784 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.399572 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfv8f\" (UniqueName: \"kubernetes.io/projected/941e3bda-6f4a-481b-bb73-1c531d70607e-kube-api-access-mfv8f\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.408593 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.515971 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 04:39:49 crc kubenswrapper[4751]: I1123 04:39:49.997554 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 04:39:50 crc kubenswrapper[4751]: I1123 04:39:50.009535 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:39:50 crc kubenswrapper[4751]: I1123 04:39:50.469119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"941e3bda-6f4a-481b-bb73-1c531d70607e","Type":"ContainerStarted","Data":"4a365d5da6bbe61f129ecc96f16cdc08cd153a04597adc9147c94b94dda4e72a"} Nov 23 04:39:56 crc kubenswrapper[4751]: I1123 04:39:56.961226 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nns7b"] Nov 23 04:39:56 crc kubenswrapper[4751]: I1123 04:39:56.963588 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:56 crc kubenswrapper[4751]: I1123 04:39:56.982798 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nns7b"] Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.048613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kblvt\" (UniqueName: \"kubernetes.io/projected/37b7b5d8-c327-4a19-be48-92a020d4839b-kube-api-access-kblvt\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.048687 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-utilities\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.049053 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-catalog-content\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.151180 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-utilities\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.151293 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-catalog-content\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.151383 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kblvt\" (UniqueName: \"kubernetes.io/projected/37b7b5d8-c327-4a19-be48-92a020d4839b-kube-api-access-kblvt\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.151818 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-utilities\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.151872 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-catalog-content\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.172740 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kblvt\" (UniqueName: \"kubernetes.io/projected/37b7b5d8-c327-4a19-be48-92a020d4839b-kube-api-access-kblvt\") pod \"certified-operators-nns7b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.295339 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:39:57 crc kubenswrapper[4751]: I1123 04:39:57.876253 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nns7b"] Nov 23 04:39:57 crc kubenswrapper[4751]: W1123 04:39:57.888099 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37b7b5d8_c327_4a19_be48_92a020d4839b.slice/crio-c3b8bd8c2a8ed1896b2a81d6ecf4d294b42a8f7a32ad52a062d187c7476c1bd2 WatchSource:0}: Error finding container c3b8bd8c2a8ed1896b2a81d6ecf4d294b42a8f7a32ad52a062d187c7476c1bd2: Status 404 returned error can't find the container with id c3b8bd8c2a8ed1896b2a81d6ecf4d294b42a8f7a32ad52a062d187c7476c1bd2 Nov 23 04:39:58 crc kubenswrapper[4751]: I1123 04:39:58.559931 4751 generic.go:334] "Generic (PLEG): container finished" podID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerID="7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976" exitCode=0 Nov 23 04:39:58 crc kubenswrapper[4751]: I1123 04:39:58.559969 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nns7b" event={"ID":"37b7b5d8-c327-4a19-be48-92a020d4839b","Type":"ContainerDied","Data":"7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976"} Nov 23 04:39:58 crc kubenswrapper[4751]: I1123 04:39:58.559993 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nns7b" event={"ID":"37b7b5d8-c327-4a19-be48-92a020d4839b","Type":"ContainerStarted","Data":"c3b8bd8c2a8ed1896b2a81d6ecf4d294b42a8f7a32ad52a062d187c7476c1bd2"} Nov 23 04:39:59 crc kubenswrapper[4751]: I1123 04:39:59.577233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nns7b" event={"ID":"37b7b5d8-c327-4a19-be48-92a020d4839b","Type":"ContainerStarted","Data":"51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e"} Nov 23 04:40:00 crc kubenswrapper[4751]: I1123 04:40:00.589855 4751 generic.go:334] "Generic (PLEG): container finished" podID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerID="51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e" exitCode=0 Nov 23 04:40:00 crc kubenswrapper[4751]: I1123 04:40:00.590146 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nns7b" event={"ID":"37b7b5d8-c327-4a19-be48-92a020d4839b","Type":"ContainerDied","Data":"51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e"} Nov 23 04:40:08 crc kubenswrapper[4751]: I1123 04:40:08.114607 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:40:08 crc kubenswrapper[4751]: I1123 04:40:08.115227 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" 
podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:40:08 crc kubenswrapper[4751]: I1123 04:40:08.115262 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:40:08 crc kubenswrapper[4751]: I1123 04:40:08.117255 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"becfcf8f99d13492f3144c84653840ad34458d91fb46c4f04a07b9f380cb6bed"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:40:08 crc kubenswrapper[4751]: I1123 04:40:08.117319 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://becfcf8f99d13492f3144c84653840ad34458d91fb46c4f04a07b9f380cb6bed" gracePeriod=600 Nov 23 04:40:08 crc kubenswrapper[4751]: I1123 04:40:08.674461 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="becfcf8f99d13492f3144c84653840ad34458d91fb46c4f04a07b9f380cb6bed" exitCode=0 Nov 23 04:40:08 crc kubenswrapper[4751]: I1123 04:40:08.674508 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"becfcf8f99d13492f3144c84653840ad34458d91fb46c4f04a07b9f380cb6bed"} Nov 23 04:40:08 crc kubenswrapper[4751]: I1123 04:40:08.674560 4751 scope.go:117] "RemoveContainer" containerID="f957e4ed302768776ada856b5a361fff652c21c507d06223305c359ff34a2d4b" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.582218 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jtwrs"] Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.584392 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.594698 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtwrs"] Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.719012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78nlr\" (UniqueName: \"kubernetes.io/projected/4fb496b9-f1c7-486d-89c4-f76bf048a337-kube-api-access-78nlr\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.719082 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-utilities\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.719147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-catalog-content\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.823034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-catalog-content\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.823152 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78nlr\" (UniqueName: \"kubernetes.io/projected/4fb496b9-f1c7-486d-89c4-f76bf048a337-kube-api-access-78nlr\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.823204 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-utilities\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.823668 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-utilities\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.823922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-catalog-content\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.848248 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-78nlr\" (UniqueName: \"kubernetes.io/projected/4fb496b9-f1c7-486d-89c4-f76bf048a337-kube-api-access-78nlr\") pod \"redhat-operators-jtwrs\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:21 crc kubenswrapper[4751]: I1123 04:40:21.908163 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:35 crc kubenswrapper[4751]: E1123 04:40:35.816274 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 23 04:40:35 crc kubenswrapper[4751]: E1123 04:40:35.817124 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfv8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,Con
figMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(941e3bda-6f4a-481b-bb73-1c531d70607e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 04:40:35 crc kubenswrapper[4751]: E1123 04:40:35.818199 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="941e3bda-6f4a-481b-bb73-1c531d70607e" Nov 23 04:40:35 crc kubenswrapper[4751]: E1123 04:40:35.980578 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="941e3bda-6f4a-481b-bb73-1c531d70607e" Nov 23 04:40:36 crc kubenswrapper[4751]: I1123 04:40:36.233339 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtwrs"] Nov 23 04:40:36 crc kubenswrapper[4751]: I1123 04:40:36.988376 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"} Nov 23 04:40:36 crc kubenswrapper[4751]: I1123 04:40:36.992534 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nns7b" event={"ID":"37b7b5d8-c327-4a19-be48-92a020d4839b","Type":"ContainerStarted","Data":"7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0"} Nov 23 04:40:36 crc kubenswrapper[4751]: I1123 04:40:36.995006 4751 generic.go:334] "Generic (PLEG): container finished" podID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerID="cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322" exitCode=0 Nov 23 04:40:36 crc kubenswrapper[4751]: I1123 04:40:36.995106 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwrs" event={"ID":"4fb496b9-f1c7-486d-89c4-f76bf048a337","Type":"ContainerDied","Data":"cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322"} Nov 23 04:40:36 crc kubenswrapper[4751]: I1123 04:40:36.995159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwrs" event={"ID":"4fb496b9-f1c7-486d-89c4-f76bf048a337","Type":"ContainerStarted","Data":"790647c9bc0412709a9f9bfb2320814eaa215ff3e3b89fd0240c6e2fb59ffeb7"} Nov 23 04:40:37 crc kubenswrapper[4751]: I1123 04:40:37.038387 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nns7b" podStartSLOduration=3.844788541 podStartE2EDuration="41.038331984s" podCreationTimestamp="2025-11-23 04:39:56 +0000 UTC" firstStartedPulling="2025-11-23 04:39:58.564146591 +0000 UTC m=+2694.757817970" lastFinishedPulling="2025-11-23 04:40:35.757690034 +0000 UTC m=+2731.951361413" observedRunningTime="2025-11-23 04:40:37.031074615 +0000 UTC m=+2733.224746004" 
watchObservedRunningTime="2025-11-23 04:40:37.038331984 +0000 UTC m=+2733.232003383" Nov 23 04:40:37 crc kubenswrapper[4751]: I1123 04:40:37.295533 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:40:37 crc kubenswrapper[4751]: I1123 04:40:37.295796 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:40:38 crc kubenswrapper[4751]: I1123 04:40:38.361098 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nns7b" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="registry-server" probeResult="failure" output=< Nov 23 04:40:38 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Nov 23 04:40:38 crc kubenswrapper[4751]: > Nov 23 04:40:39 crc kubenswrapper[4751]: I1123 04:40:39.020217 4751 generic.go:334] "Generic (PLEG): container finished" podID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerID="0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3" exitCode=0 Nov 23 04:40:39 crc kubenswrapper[4751]: I1123 04:40:39.022648 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwrs" event={"ID":"4fb496b9-f1c7-486d-89c4-f76bf048a337","Type":"ContainerDied","Data":"0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3"} Nov 23 04:40:42 crc kubenswrapper[4751]: I1123 04:40:42.054809 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwrs" event={"ID":"4fb496b9-f1c7-486d-89c4-f76bf048a337","Type":"ContainerStarted","Data":"9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6"} Nov 23 04:40:42 crc kubenswrapper[4751]: I1123 04:40:42.075858 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jtwrs" podStartSLOduration=16.88376558 podStartE2EDuration="21.075843169s" podCreationTimestamp="2025-11-23 04:40:21 +0000 UTC" firstStartedPulling="2025-11-23 04:40:36.999823341 +0000 UTC m=+2733.193494740" lastFinishedPulling="2025-11-23 04:40:41.19190095 +0000 UTC m=+2737.385572329" observedRunningTime="2025-11-23 04:40:42.073079017 +0000 UTC m=+2738.266750376" watchObservedRunningTime="2025-11-23 04:40:42.075843169 +0000 UTC m=+2738.269514528" Nov 23 04:40:48 crc kubenswrapper[4751]: I1123 04:40:48.363933 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nns7b" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="registry-server" probeResult="failure" output=< Nov 23 04:40:48 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Nov 23 04:40:48 crc kubenswrapper[4751]: > Nov 23 04:40:49 crc kubenswrapper[4751]: I1123 04:40:49.324264 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 04:40:51 crc kubenswrapper[4751]: I1123 04:40:51.161976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"941e3bda-6f4a-481b-bb73-1c531d70607e","Type":"ContainerStarted","Data":"73091f5e7ff988bb2655e3c5fb52a12762135e6e0ae47527bda82a1997437224"} Nov 23 04:40:51 crc kubenswrapper[4751]: I1123 04:40:51.190814 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.878127697 
podStartE2EDuration="1m3.190796898s" podCreationTimestamp="2025-11-23 04:39:48 +0000 UTC" firstStartedPulling="2025-11-23 04:39:50.009289749 +0000 UTC m=+2686.202961108" lastFinishedPulling="2025-11-23 04:40:49.32195895 +0000 UTC m=+2745.515630309" observedRunningTime="2025-11-23 04:40:51.187065421 +0000 UTC m=+2747.380736810" watchObservedRunningTime="2025-11-23 04:40:51.190796898 +0000 UTC m=+2747.384468257" Nov 23 04:40:51 crc kubenswrapper[4751]: I1123 04:40:51.908505 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:51 crc kubenswrapper[4751]: I1123 04:40:51.909466 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:40:52 crc kubenswrapper[4751]: I1123 04:40:52.977776 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jtwrs" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="registry-server" probeResult="failure" output=< Nov 23 04:40:52 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Nov 23 04:40:52 crc kubenswrapper[4751]: > Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.430982 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zkbm9"] Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.434331 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.446846 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkbm9"] Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.507879 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-utilities\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.507988 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v768q\" (UniqueName: \"kubernetes.io/projected/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-kube-api-access-v768q\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.508036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-catalog-content\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.610100 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-utilities\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.610183 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v768q\" (UniqueName: 
\"kubernetes.io/projected/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-kube-api-access-v768q\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.610226 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-catalog-content\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.610802 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-catalog-content\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.611021 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-utilities\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.632432 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v768q\" (UniqueName: \"kubernetes.io/projected/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-kube-api-access-v768q\") pod \"redhat-marketplace-zkbm9\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:54 crc kubenswrapper[4751]: I1123 04:40:54.766779 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:40:55 crc kubenswrapper[4751]: I1123 04:40:55.278962 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkbm9"] Nov 23 04:40:55 crc kubenswrapper[4751]: W1123 04:40:55.284488 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82a6ef41_ba02_47a3_b710_7f4bcd9104d1.slice/crio-a07ce6a0047352f31a46699019de8ed8c3e0242a09aa6f5d866845a0cc83c02b WatchSource:0}: Error finding container a07ce6a0047352f31a46699019de8ed8c3e0242a09aa6f5d866845a0cc83c02b: Status 404 returned error can't find the container with id a07ce6a0047352f31a46699019de8ed8c3e0242a09aa6f5d866845a0cc83c02b Nov 23 04:40:56 crc kubenswrapper[4751]: I1123 04:40:56.232812 4751 generic.go:334] "Generic (PLEG): container finished" podID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerID="966baadfac93dee88248f54b14fa30e4512892b50f5f7f2eb1c3859ec6054f9e" exitCode=0 Nov 23 04:40:56 crc kubenswrapper[4751]: I1123 04:40:56.233101 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkbm9" event={"ID":"82a6ef41-ba02-47a3-b710-7f4bcd9104d1","Type":"ContainerDied","Data":"966baadfac93dee88248f54b14fa30e4512892b50f5f7f2eb1c3859ec6054f9e"} Nov 23 04:40:56 crc kubenswrapper[4751]: I1123 04:40:56.233135 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkbm9" event={"ID":"82a6ef41-ba02-47a3-b710-7f4bcd9104d1","Type":"ContainerStarted","Data":"a07ce6a0047352f31a46699019de8ed8c3e0242a09aa6f5d866845a0cc83c02b"} Nov 23 04:40:57 crc kubenswrapper[4751]: I1123 04:40:57.243616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkbm9" event={"ID":"82a6ef41-ba02-47a3-b710-7f4bcd9104d1","Type":"ContainerStarted","Data":"d738c9ce77b2a51ceee122cf5e536d9566f2a423d131c225a88d6eebc1516556"} Nov 23 04:40:57 crc kubenswrapper[4751]: I1123 04:40:57.373529 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:40:57 crc kubenswrapper[4751]: I1123 04:40:57.438115 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:40:58 crc kubenswrapper[4751]: I1123 04:40:58.262859 4751 generic.go:334] "Generic (PLEG): container finished" podID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerID="d738c9ce77b2a51ceee122cf5e536d9566f2a423d131c225a88d6eebc1516556" exitCode=0 Nov 23 04:40:58 crc kubenswrapper[4751]: I1123 04:40:58.263078 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkbm9" event={"ID":"82a6ef41-ba02-47a3-b710-7f4bcd9104d1","Type":"ContainerDied","Data":"d738c9ce77b2a51ceee122cf5e536d9566f2a423d131c225a88d6eebc1516556"} Nov 23 04:40:59 crc kubenswrapper[4751]: I1123 04:40:59.197127 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nns7b"] Nov 23 04:40:59 crc kubenswrapper[4751]: I1123 04:40:59.274096 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkbm9" event={"ID":"82a6ef41-ba02-47a3-b710-7f4bcd9104d1","Type":"ContainerStarted","Data":"73a0b3f8a12bf41a261d3baaefe1ff80fac965fec797658b3706341fbee9732b"} Nov 23 04:40:59 crc kubenswrapper[4751]: I1123 04:40:59.274296 4751 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nns7b" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="registry-server" containerID="cri-o://7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0" gracePeriod=2 Nov 23 04:40:59 crc kubenswrapper[4751]: I1123 04:40:59.312137 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zkbm9" podStartSLOduration=2.819513128 podStartE2EDuration="5.31211879s" podCreationTimestamp="2025-11-23 04:40:54 +0000 UTC" firstStartedPulling="2025-11-23 04:40:56.235393987 +0000 UTC m=+2752.429065356" lastFinishedPulling="2025-11-23 04:40:58.727999649 +0000 UTC m=+2754.921671018" observedRunningTime="2025-11-23 04:40:59.306904575 +0000 UTC m=+2755.500575934" watchObservedRunningTime="2025-11-23 04:40:59.31211879 +0000 UTC m=+2755.505790149" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.243504 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.291171 4751 generic.go:334] "Generic (PLEG): container finished" podID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerID="7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0" exitCode=0 Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.292151 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nns7b" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.292100 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nns7b" event={"ID":"37b7b5d8-c327-4a19-be48-92a020d4839b","Type":"ContainerDied","Data":"7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0"} Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.292319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nns7b" event={"ID":"37b7b5d8-c327-4a19-be48-92a020d4839b","Type":"ContainerDied","Data":"c3b8bd8c2a8ed1896b2a81d6ecf4d294b42a8f7a32ad52a062d187c7476c1bd2"} Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.292394 4751 scope.go:117] "RemoveContainer" containerID="7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.322543 4751 scope.go:117] "RemoveContainer" containerID="51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.336233 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-catalog-content\") pod \"37b7b5d8-c327-4a19-be48-92a020d4839b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.336426 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-utilities\") pod \"37b7b5d8-c327-4a19-be48-92a020d4839b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.336597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kblvt\" (UniqueName: 
\"kubernetes.io/projected/37b7b5d8-c327-4a19-be48-92a020d4839b-kube-api-access-kblvt\") pod \"37b7b5d8-c327-4a19-be48-92a020d4839b\" (UID: \"37b7b5d8-c327-4a19-be48-92a020d4839b\") " Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.337153 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-utilities" (OuterVolumeSpecName: "utilities") pod "37b7b5d8-c327-4a19-be48-92a020d4839b" (UID: "37b7b5d8-c327-4a19-be48-92a020d4839b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.342880 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b7b5d8-c327-4a19-be48-92a020d4839b-kube-api-access-kblvt" (OuterVolumeSpecName: "kube-api-access-kblvt") pod "37b7b5d8-c327-4a19-be48-92a020d4839b" (UID: "37b7b5d8-c327-4a19-be48-92a020d4839b"). InnerVolumeSpecName "kube-api-access-kblvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.347470 4751 scope.go:117] "RemoveContainer" containerID="7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.383585 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37b7b5d8-c327-4a19-be48-92a020d4839b" (UID: "37b7b5d8-c327-4a19-be48-92a020d4839b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.422337 4751 scope.go:117] "RemoveContainer" containerID="7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0" Nov 23 04:41:00 crc kubenswrapper[4751]: E1123 04:41:00.422823 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0\": container with ID starting with 7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0 not found: ID does not exist" containerID="7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.422890 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0"} err="failed to get container status \"7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0\": rpc error: code = NotFound desc = could not find container \"7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0\": container with ID starting with 7fad249a807bbb2e90a5f03e0d2ab98eaf66cc4db68793d033b031b1ab9650e0 not found: ID does not exist" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.422915 4751 scope.go:117] "RemoveContainer" containerID="51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e" Nov 23 04:41:00 crc kubenswrapper[4751]: E1123 04:41:00.423616 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e\": container with ID starting with 51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e not found: ID does not exist" 
containerID="51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.423644 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e"} err="failed to get container status \"51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e\": rpc error: code = NotFound desc = could not find container \"51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e\": container with ID starting with 51a7270458feab378be790cf0adca4d26bace0ba52f9f9ace498e0cf16e25e8e not found: ID does not exist" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.423658 4751 scope.go:117] "RemoveContainer" containerID="7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976" Nov 23 04:41:00 crc kubenswrapper[4751]: E1123 04:41:00.424209 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976\": container with ID starting with 7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976 not found: ID does not exist" containerID="7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.424339 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976"} err="failed to get container status \"7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976\": rpc error: code = NotFound desc = could not find container \"7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976\": container with ID starting with 7792a5fed88492c4fb76e9c85ebf62e891dc62fba2c6686b21c4c7396273a976 not found: ID does not exist" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.438938 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kblvt\" (UniqueName: \"kubernetes.io/projected/37b7b5d8-c327-4a19-be48-92a020d4839b-kube-api-access-kblvt\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.438978 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.438989 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b7b5d8-c327-4a19-be48-92a020d4839b-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.637528 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nns7b"] Nov 23 04:41:00 crc kubenswrapper[4751]: I1123 04:41:00.664830 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nns7b"] Nov 23 04:41:02 crc kubenswrapper[4751]: I1123 04:41:02.654863 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" path="/var/lib/kubelet/pods/37b7b5d8-c327-4a19-be48-92a020d4839b/volumes" Nov 23 04:41:02 crc kubenswrapper[4751]: I1123 04:41:02.957662 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jtwrs" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" 
containerName="registry-server" probeResult="failure" output=< Nov 23 04:41:02 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Nov 23 04:41:02 crc kubenswrapper[4751]: > Nov 23 04:41:04 crc kubenswrapper[4751]: I1123 04:41:04.767149 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:41:04 crc kubenswrapper[4751]: I1123 04:41:04.767843 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:41:04 crc kubenswrapper[4751]: I1123 04:41:04.840459 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:41:05 crc kubenswrapper[4751]: I1123 04:41:05.403036 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:41:05 crc kubenswrapper[4751]: I1123 04:41:05.483587 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkbm9"] Nov 23 04:41:07 crc kubenswrapper[4751]: I1123 04:41:07.370579 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zkbm9" podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerName="registry-server" containerID="cri-o://73a0b3f8a12bf41a261d3baaefe1ff80fac965fec797658b3706341fbee9732b" gracePeriod=2 Nov 23 04:41:08 crc kubenswrapper[4751]: I1123 04:41:08.385314 4751 generic.go:334] "Generic (PLEG): container finished" podID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerID="73a0b3f8a12bf41a261d3baaefe1ff80fac965fec797658b3706341fbee9732b" exitCode=0 Nov 23 04:41:08 crc kubenswrapper[4751]: I1123 04:41:08.385412 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkbm9" event={"ID":"82a6ef41-ba02-47a3-b710-7f4bcd9104d1","Type":"ContainerDied","Data":"73a0b3f8a12bf41a261d3baaefe1ff80fac965fec797658b3706341fbee9732b"} Nov 23 04:41:09 crc kubenswrapper[4751]: I1123 04:41:09.611633 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:41:09 crc kubenswrapper[4751]: I1123 04:41:09.722373 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-utilities\") pod \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " Nov 23 04:41:09 crc kubenswrapper[4751]: I1123 04:41:09.722443 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v768q\" (UniqueName: \"kubernetes.io/projected/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-kube-api-access-v768q\") pod \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " Nov 23 04:41:09 crc kubenswrapper[4751]: I1123 04:41:09.722526 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-catalog-content\") pod \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\" (UID: \"82a6ef41-ba02-47a3-b710-7f4bcd9104d1\") " Nov 23 04:41:09 crc kubenswrapper[4751]: I1123 04:41:09.723780 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-utilities" (OuterVolumeSpecName: "utilities") pod "82a6ef41-ba02-47a3-b710-7f4bcd9104d1" (UID: "82a6ef41-ba02-47a3-b710-7f4bcd9104d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:41:09 crc kubenswrapper[4751]: I1123 04:41:09.728018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-kube-api-access-v768q" (OuterVolumeSpecName: "kube-api-access-v768q") pod "82a6ef41-ba02-47a3-b710-7f4bcd9104d1" (UID: "82a6ef41-ba02-47a3-b710-7f4bcd9104d1"). InnerVolumeSpecName "kube-api-access-v768q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:41:09 crc kubenswrapper[4751]: I1123 04:41:09.825425 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:09 crc kubenswrapper[4751]: I1123 04:41:09.825461 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v768q\" (UniqueName: \"kubernetes.io/projected/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-kube-api-access-v768q\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:10 crc kubenswrapper[4751]: I1123 04:41:10.432610 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkbm9" event={"ID":"82a6ef41-ba02-47a3-b710-7f4bcd9104d1","Type":"ContainerDied","Data":"a07ce6a0047352f31a46699019de8ed8c3e0242a09aa6f5d866845a0cc83c02b"} Nov 23 04:41:10 crc kubenswrapper[4751]: I1123 04:41:10.432795 4751 scope.go:117] "RemoveContainer" containerID="73a0b3f8a12bf41a261d3baaefe1ff80fac965fec797658b3706341fbee9732b" Nov 23 04:41:10 crc kubenswrapper[4751]: I1123 04:41:10.433167 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zkbm9" Nov 23 04:41:10 crc kubenswrapper[4751]: I1123 04:41:10.475790 4751 scope.go:117] "RemoveContainer" containerID="d738c9ce77b2a51ceee122cf5e536d9566f2a423d131c225a88d6eebc1516556" Nov 23 04:41:10 crc kubenswrapper[4751]: I1123 04:41:10.493321 4751 scope.go:117] "RemoveContainer" containerID="966baadfac93dee88248f54b14fa30e4512892b50f5f7f2eb1c3859ec6054f9e" Nov 23 04:41:11 crc kubenswrapper[4751]: I1123 04:41:11.878800 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82a6ef41-ba02-47a3-b710-7f4bcd9104d1" (UID: "82a6ef41-ba02-47a3-b710-7f4bcd9104d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:41:11 crc kubenswrapper[4751]: I1123 04:41:11.885129 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82a6ef41-ba02-47a3-b710-7f4bcd9104d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:11 crc kubenswrapper[4751]: I1123 04:41:11.988144 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkbm9"] Nov 23 04:41:11 crc kubenswrapper[4751]: I1123 04:41:11.998534 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkbm9"] Nov 23 04:41:12 crc kubenswrapper[4751]: I1123 04:41:12.669661 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" path="/var/lib/kubelet/pods/82a6ef41-ba02-47a3-b710-7f4bcd9104d1/volumes" Nov 23 04:41:12 crc kubenswrapper[4751]: I1123 04:41:12.962682 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jtwrs" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="registry-server" probeResult="failure" output=< Nov 23 04:41:12 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Nov 23 04:41:12 crc kubenswrapper[4751]: > Nov 23 04:41:21 crc kubenswrapper[4751]: I1123 04:41:21.986978 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:41:22 crc kubenswrapper[4751]: I1123 04:41:22.072978 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:41:22 crc kubenswrapper[4751]: I1123 04:41:22.802369 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtwrs"] Nov 23 04:41:23 crc kubenswrapper[4751]: I1123 04:41:23.570506 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jtwrs" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="registry-server" containerID="cri-o://9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6" gracePeriod=2 Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.093679 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.264039 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78nlr\" (UniqueName: \"kubernetes.io/projected/4fb496b9-f1c7-486d-89c4-f76bf048a337-kube-api-access-78nlr\") pod \"4fb496b9-f1c7-486d-89c4-f76bf048a337\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.264184 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-catalog-content\") pod \"4fb496b9-f1c7-486d-89c4-f76bf048a337\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.264235 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-utilities\") pod \"4fb496b9-f1c7-486d-89c4-f76bf048a337\" (UID: \"4fb496b9-f1c7-486d-89c4-f76bf048a337\") " Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.265562 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-utilities" (OuterVolumeSpecName: "utilities") pod "4fb496b9-f1c7-486d-89c4-f76bf048a337" (UID: "4fb496b9-f1c7-486d-89c4-f76bf048a337"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.273266 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.287232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb496b9-f1c7-486d-89c4-f76bf048a337-kube-api-access-78nlr" (OuterVolumeSpecName: "kube-api-access-78nlr") pod "4fb496b9-f1c7-486d-89c4-f76bf048a337" (UID: "4fb496b9-f1c7-486d-89c4-f76bf048a337"). InnerVolumeSpecName "kube-api-access-78nlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.339649 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fb496b9-f1c7-486d-89c4-f76bf048a337" (UID: "4fb496b9-f1c7-486d-89c4-f76bf048a337"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.375317 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fb496b9-f1c7-486d-89c4-f76bf048a337-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.375406 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78nlr\" (UniqueName: \"kubernetes.io/projected/4fb496b9-f1c7-486d-89c4-f76bf048a337-kube-api-access-78nlr\") on node \"crc\" DevicePath \"\"" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.602982 4751 generic.go:334] "Generic (PLEG): container finished" podID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerID="9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6" exitCode=0 Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.603203 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtwrs" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.603227 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwrs" event={"ID":"4fb496b9-f1c7-486d-89c4-f76bf048a337","Type":"ContainerDied","Data":"9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6"} Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.603488 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtwrs" event={"ID":"4fb496b9-f1c7-486d-89c4-f76bf048a337","Type":"ContainerDied","Data":"790647c9bc0412709a9f9bfb2320814eaa215ff3e3b89fd0240c6e2fb59ffeb7"} Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.603524 4751 scope.go:117] "RemoveContainer" containerID="9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.642745 4751 scope.go:117] "RemoveContainer" containerID="0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.664890 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtwrs"] Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.664950 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jtwrs"] Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.675715 4751 scope.go:117] "RemoveContainer" containerID="cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.712794 4751 scope.go:117] "RemoveContainer" containerID="9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6" Nov 23 04:41:24 crc kubenswrapper[4751]: E1123 04:41:24.713149 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6\": container with ID starting with 9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6 not found: ID does not exist" containerID="9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.713182 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6"} err="failed to get container status \"9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6\": 
rpc error: code = NotFound desc = could not find container \"9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6\": container with ID starting with 9f14174afcc2d6e25047a38318ea35e9c57cf7f24326a1fd1fed0efb5d7cefc6 not found: ID does not exist" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.713207 4751 scope.go:117] "RemoveContainer" containerID="0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3" Nov 23 04:41:24 crc kubenswrapper[4751]: E1123 04:41:24.715499 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3\": container with ID starting with 0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3 not found: ID does not exist" containerID="0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.715541 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3"} err="failed to get container status \"0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3\": rpc error: code = NotFound desc = could not find container \"0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3\": container with ID starting with 0320929c0d67a375c47ce372783cdd777cbcf1efb2fd6ed77f9c1d9e40f2f2a3 not found: ID does not exist" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.715568 4751 scope.go:117] "RemoveContainer" containerID="cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322" Nov 23 04:41:24 crc kubenswrapper[4751]: E1123 04:41:24.715853 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322\": container with ID starting with cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322 not found: ID does not exist" containerID="cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322" Nov 23 04:41:24 crc kubenswrapper[4751]: I1123 04:41:24.715872 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322"} err="failed to get container status \"cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322\": rpc error: code = NotFound desc = could not find container \"cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322\": container with ID starting with cd1b3b3428d245f2cde99d91c6bca963abdaa3e3d0a12aa2d7859c490f09c322 not found: ID does not exist" Nov 23 04:41:26 crc kubenswrapper[4751]: I1123 04:41:26.663734 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" path="/var/lib/kubelet/pods/4fb496b9-f1c7-486d-89c4-f76bf048a337/volumes" Nov 23 04:42:38 crc kubenswrapper[4751]: I1123 04:42:38.115203 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:42:38 crc kubenswrapper[4751]: I1123 04:42:38.115816 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" 
podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:43:08 crc kubenswrapper[4751]: I1123 04:43:08.114513 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:43:08 crc kubenswrapper[4751]: I1123 04:43:08.115109 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:43:38 crc kubenswrapper[4751]: I1123 04:43:38.115123 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:43:38 crc kubenswrapper[4751]: I1123 04:43:38.115933 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:43:38 crc kubenswrapper[4751]: I1123 04:43:38.115993 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:43:38 crc kubenswrapper[4751]: I1123 04:43:38.116884 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:43:38 crc kubenswrapper[4751]: I1123 04:43:38.116956 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" gracePeriod=600 Nov 23 04:43:38 crc kubenswrapper[4751]: E1123 04:43:38.273175 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:43:39 crc kubenswrapper[4751]: I1123 04:43:39.156598 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" exitCode=0 Nov 23 04:43:39 crc kubenswrapper[4751]: I1123 04:43:39.156668 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"} Nov 23 04:43:39 crc kubenswrapper[4751]: I1123 04:43:39.156722 4751 scope.go:117] "RemoveContainer" containerID="becfcf8f99d13492f3144c84653840ad34458d91fb46c4f04a07b9f380cb6bed" Nov 23 04:43:39 crc kubenswrapper[4751]: I1123 04:43:39.157797 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" Nov 23 04:43:39 crc kubenswrapper[4751]: E1123 04:43:39.158425 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:43:51 crc kubenswrapper[4751]: I1123 04:43:51.644745 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" Nov 23 04:43:51 crc kubenswrapper[4751]: E1123 04:43:51.645769 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:44:05 crc kubenswrapper[4751]: I1123 04:44:05.644981 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" Nov 23 04:44:05 crc kubenswrapper[4751]: E1123 04:44:05.646297 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:44:18 crc kubenswrapper[4751]: I1123 04:44:18.645043 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" Nov 23 04:44:18 crc kubenswrapper[4751]: E1123 04:44:18.647693 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:44:32 crc kubenswrapper[4751]: I1123 04:44:32.643780 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" Nov 23 04:44:32 crc kubenswrapper[4751]: E1123 04:44:32.644727 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:44:45 crc kubenswrapper[4751]: I1123 04:44:45.644814 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" Nov 23 04:44:45 crc kubenswrapper[4751]: E1123 04:44:45.645966 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:44:59 crc kubenswrapper[4751]: I1123 04:44:59.644807 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" Nov 23 04:44:59 crc kubenswrapper[4751]: E1123 04:44:59.645612 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.197210 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"] Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.197993 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198018 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.198030 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="extract-content" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198038 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="extract-content" Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.198054 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="extract-utilities" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198063 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="extract-utilities" Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.198077 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="extract-utilities" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198084 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="extract-utilities" Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.198104 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerName="extract-content" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198111 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerName="extract-content" Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.198130 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="extract-content" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198140 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="extract-content" Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.198161 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198170 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.198185 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198193 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: E1123 04:45:00.198211 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerName="extract-utilities" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198218 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerName="extract-utilities" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198455 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="82a6ef41-ba02-47a3-b710-7f4bcd9104d1" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198475 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb496b9-f1c7-486d-89c4-f76bf048a337" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.198499 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b7b5d8-c327-4a19-be48-92a020d4839b" containerName="registry-server" Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.199325 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.201752 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.202026 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.205024 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"]
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.249552 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kp2z\" (UniqueName: \"kubernetes.io/projected/5a0790b6-c640-4f99-9157-b26c03c741f3-kube-api-access-7kp2z\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.249608 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a0790b6-c640-4f99-9157-b26c03c741f3-secret-volume\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.249655 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a0790b6-c640-4f99-9157-b26c03c741f3-config-volume\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.351517 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kp2z\" (UniqueName: \"kubernetes.io/projected/5a0790b6-c640-4f99-9157-b26c03c741f3-kube-api-access-7kp2z\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.351580 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a0790b6-c640-4f99-9157-b26c03c741f3-secret-volume\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.351627 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a0790b6-c640-4f99-9157-b26c03c741f3-config-volume\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.352545 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a0790b6-c640-4f99-9157-b26c03c741f3-config-volume\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.359884 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a0790b6-c640-4f99-9157-b26c03c741f3-secret-volume\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.367578 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kp2z\" (UniqueName: \"kubernetes.io/projected/5a0790b6-c640-4f99-9157-b26c03c741f3-kube-api-access-7kp2z\") pod \"collect-profiles-29397885-zbqhw\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.548184 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:00 crc kubenswrapper[4751]: I1123 04:45:00.986896 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"]
Nov 23 04:45:01 crc kubenswrapper[4751]: I1123 04:45:01.038178 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw" event={"ID":"5a0790b6-c640-4f99-9157-b26c03c741f3","Type":"ContainerStarted","Data":"ffd7f3015a4cf6d9a01d30e0b3dba495a2653a1d7e7b649ff933221db1214d87"}
Nov 23 04:45:02 crc kubenswrapper[4751]: I1123 04:45:02.050051 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw" event={"ID":"5a0790b6-c640-4f99-9157-b26c03c741f3","Type":"ContainerDied","Data":"977d56575ed1d10afe2d0c2800b6be236d0e21c6b493ec5c241016f579a409b5"}
Nov 23 04:45:02 crc kubenswrapper[4751]: I1123 04:45:02.050323 4751 generic.go:334] "Generic (PLEG): container finished" podID="5a0790b6-c640-4f99-9157-b26c03c741f3" containerID="977d56575ed1d10afe2d0c2800b6be236d0e21c6b493ec5c241016f579a409b5" exitCode=0
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.545579 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.710280 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kp2z\" (UniqueName: \"kubernetes.io/projected/5a0790b6-c640-4f99-9157-b26c03c741f3-kube-api-access-7kp2z\") pod \"5a0790b6-c640-4f99-9157-b26c03c741f3\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") "
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.710497 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a0790b6-c640-4f99-9157-b26c03c741f3-secret-volume\") pod \"5a0790b6-c640-4f99-9157-b26c03c741f3\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") "
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.710612 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a0790b6-c640-4f99-9157-b26c03c741f3-config-volume\") pod \"5a0790b6-c640-4f99-9157-b26c03c741f3\" (UID: \"5a0790b6-c640-4f99-9157-b26c03c741f3\") "
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.714552 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0790b6-c640-4f99-9157-b26c03c741f3-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a0790b6-c640-4f99-9157-b26c03c741f3" (UID: "5a0790b6-c640-4f99-9157-b26c03c741f3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.731077 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0790b6-c640-4f99-9157-b26c03c741f3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a0790b6-c640-4f99-9157-b26c03c741f3" (UID: "5a0790b6-c640-4f99-9157-b26c03c741f3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.731159 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0790b6-c640-4f99-9157-b26c03c741f3-kube-api-access-7kp2z" (OuterVolumeSpecName: "kube-api-access-7kp2z") pod "5a0790b6-c640-4f99-9157-b26c03c741f3" (UID: "5a0790b6-c640-4f99-9157-b26c03c741f3"). InnerVolumeSpecName "kube-api-access-7kp2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.812971 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kp2z\" (UniqueName: \"kubernetes.io/projected/5a0790b6-c640-4f99-9157-b26c03c741f3-kube-api-access-7kp2z\") on node \"crc\" DevicePath \"\""
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.813029 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a0790b6-c640-4f99-9157-b26c03c741f3-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 04:45:03 crc kubenswrapper[4751]: I1123 04:45:03.813049 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a0790b6-c640-4f99-9157-b26c03c741f3-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 04:45:04 crc kubenswrapper[4751]: I1123 04:45:04.077946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw" event={"ID":"5a0790b6-c640-4f99-9157-b26c03c741f3","Type":"ContainerDied","Data":"ffd7f3015a4cf6d9a01d30e0b3dba495a2653a1d7e7b649ff933221db1214d87"}
Nov 23 04:45:04 crc kubenswrapper[4751]: I1123 04:45:04.077988 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397885-zbqhw"
Nov 23 04:45:04 crc kubenswrapper[4751]: I1123 04:45:04.078000 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd7f3015a4cf6d9a01d30e0b3dba495a2653a1d7e7b649ff933221db1214d87"
Nov 23 04:45:04 crc kubenswrapper[4751]: I1123 04:45:04.642059 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl"]
Nov 23 04:45:04 crc kubenswrapper[4751]: I1123 04:45:04.670517 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397840-vqxkl"]
Nov 23 04:45:06 crc kubenswrapper[4751]: I1123 04:45:06.654229 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ea08a3-ec8d-4f23-8448-de8c2266e1ef" path="/var/lib/kubelet/pods/e8ea08a3-ec8d-4f23-8448-de8c2266e1ef/volumes"
Nov 23 04:45:12 crc kubenswrapper[4751]: I1123 04:45:12.645079 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:45:12 crc kubenswrapper[4751]: E1123 04:45:12.646477 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:45:21 crc kubenswrapper[4751]: I1123 04:45:21.894560 4751 scope.go:117] "RemoveContainer" containerID="22d5fd62dbfd264ae142854e7fe24446be7cd2aa856c6c50f5b9082fdab75bb6"
Nov 23 04:45:27 crc kubenswrapper[4751]: I1123 04:45:27.644053 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:45:27 crc kubenswrapper[4751]: E1123 04:45:27.644976 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:45:40 crc kubenswrapper[4751]: I1123 04:45:40.644563 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:45:40 crc kubenswrapper[4751]: E1123 04:45:40.645866 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:45:52 crc kubenswrapper[4751]: I1123 04:45:52.644976 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:45:52 crc kubenswrapper[4751]: E1123 04:45:52.645711 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:46:06 crc kubenswrapper[4751]: I1123 04:46:06.644320 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:46:06 crc kubenswrapper[4751]: E1123 04:46:06.645233 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:46:18 crc kubenswrapper[4751]: I1123 04:46:18.644647 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:46:18 crc kubenswrapper[4751]: E1123 04:46:18.645959 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:46:33 crc kubenswrapper[4751]: I1123 04:46:33.644523 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:46:33 crc kubenswrapper[4751]: E1123 04:46:33.645177 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:46:45 crc kubenswrapper[4751]: I1123 04:46:45.644486 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:46:45 crc kubenswrapper[4751]: E1123 04:46:45.645468 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:46:59 crc kubenswrapper[4751]: I1123 04:46:59.643737 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:46:59 crc kubenswrapper[4751]: E1123 04:46:59.644392 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:47:10 crc kubenswrapper[4751]: I1123 04:47:10.644766 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:47:10 crc kubenswrapper[4751]: E1123 04:47:10.647392 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:47:24 crc kubenswrapper[4751]: I1123 04:47:24.660405 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:47:24 crc kubenswrapper[4751]: E1123 04:47:24.661642 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.772868 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-trkcq"]
Nov 23 04:47:29 crc kubenswrapper[4751]: E1123 04:47:29.773830 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0790b6-c640-4f99-9157-b26c03c741f3" containerName="collect-profiles"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.773844 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0790b6-c640-4f99-9157-b26c03c741f3" containerName="collect-profiles"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.774050 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a0790b6-c640-4f99-9157-b26c03c741f3" containerName="collect-profiles"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.775731 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.792912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trkcq"]
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.857785 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-utilities\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.857871 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw8ct\" (UniqueName: \"kubernetes.io/projected/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-kube-api-access-dw8ct\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.858018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-catalog-content\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.959404 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-utilities\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.959467 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw8ct\" (UniqueName: \"kubernetes.io/projected/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-kube-api-access-dw8ct\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.959525 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-catalog-content\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.959969 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-catalog-content\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.959972 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-utilities\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:29 crc kubenswrapper[4751]: I1123 04:47:29.984574 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw8ct\" (UniqueName: \"kubernetes.io/projected/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-kube-api-access-dw8ct\") pod \"community-operators-trkcq\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") " pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:30 crc kubenswrapper[4751]: I1123 04:47:30.109628 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:30 crc kubenswrapper[4751]: I1123 04:47:30.592143 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trkcq"]
Nov 23 04:47:30 crc kubenswrapper[4751]: W1123 04:47:30.594517 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a9d639_e5cc_4be5_8de3_75ebbd8d82b2.slice/crio-75bac8bb88a2308b84caa88c983ba7292e87497fa77bdda642fa9cf23a1d4d3d WatchSource:0}: Error finding container 75bac8bb88a2308b84caa88c983ba7292e87497fa77bdda642fa9cf23a1d4d3d: Status 404 returned error can't find the container with id 75bac8bb88a2308b84caa88c983ba7292e87497fa77bdda642fa9cf23a1d4d3d
Nov 23 04:47:30 crc kubenswrapper[4751]: I1123 04:47:30.793003 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trkcq" event={"ID":"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2","Type":"ContainerStarted","Data":"75bac8bb88a2308b84caa88c983ba7292e87497fa77bdda642fa9cf23a1d4d3d"}
Nov 23 04:47:31 crc kubenswrapper[4751]: I1123 04:47:31.809723 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerID="e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da" exitCode=0
Nov 23 04:47:31 crc kubenswrapper[4751]: I1123 04:47:31.810116 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trkcq" event={"ID":"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2","Type":"ContainerDied","Data":"e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da"}
Nov 23 04:47:31 crc kubenswrapper[4751]: I1123 04:47:31.816959 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 04:47:32 crc kubenswrapper[4751]: I1123 04:47:32.823445 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trkcq" event={"ID":"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2","Type":"ContainerStarted","Data":"555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84"}
Nov 23 04:47:33 crc kubenswrapper[4751]: I1123 04:47:33.836256 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerID="555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84" exitCode=0
Nov 23 04:47:33 crc kubenswrapper[4751]: I1123 04:47:33.836320 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trkcq" event={"ID":"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2","Type":"ContainerDied","Data":"555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84"}
Nov 23 04:47:34 crc kubenswrapper[4751]: I1123 04:47:34.849593 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trkcq" event={"ID":"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2","Type":"ContainerStarted","Data":"5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56"}
Nov 23 04:47:34 crc kubenswrapper[4751]: I1123 04:47:34.878081 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-trkcq" podStartSLOduration=3.390135811 podStartE2EDuration="5.878064068s" podCreationTimestamp="2025-11-23 04:47:29 +0000 UTC" firstStartedPulling="2025-11-23 04:47:31.815309135 +0000 UTC m=+3148.008980534" lastFinishedPulling="2025-11-23 04:47:34.303237442 +0000 UTC m=+3150.496908791" observedRunningTime="2025-11-23 04:47:34.871243521 +0000 UTC m=+3151.064914900" watchObservedRunningTime="2025-11-23 04:47:34.878064068 +0000 UTC m=+3151.071735427"
Nov 23 04:47:35 crc kubenswrapper[4751]: I1123 04:47:35.643967 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:47:35 crc kubenswrapper[4751]: E1123 04:47:35.644240 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:47:40 crc kubenswrapper[4751]: I1123 04:47:40.110289 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:40 crc kubenswrapper[4751]: I1123 04:47:40.110760 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:40 crc kubenswrapper[4751]: I1123 04:47:40.166503 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:40 crc kubenswrapper[4751]: I1123 04:47:40.946103 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:41 crc kubenswrapper[4751]: I1123 04:47:41.007969 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trkcq"]
Nov 23 04:47:42 crc kubenswrapper[4751]: I1123 04:47:42.927382 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-trkcq" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerName="registry-server" containerID="cri-o://5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56" gracePeriod=2
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.434434 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.476294 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-catalog-content\") pod \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") "
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.476499 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-utilities\") pod \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") "
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.476608 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw8ct\" (UniqueName: \"kubernetes.io/projected/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-kube-api-access-dw8ct\") pod \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\" (UID: \"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2\") "
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.481056 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-utilities" (OuterVolumeSpecName: "utilities") pod "66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" (UID: "66a9d639-e5cc-4be5-8de3-75ebbd8d82b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.489099 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-kube-api-access-dw8ct" (OuterVolumeSpecName: "kube-api-access-dw8ct") pod "66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" (UID: "66a9d639-e5cc-4be5-8de3-75ebbd8d82b2"). InnerVolumeSpecName "kube-api-access-dw8ct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.579044 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.579074 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw8ct\" (UniqueName: \"kubernetes.io/projected/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-kube-api-access-dw8ct\") on node \"crc\" DevicePath \"\""
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.939917 4751 generic.go:334] "Generic (PLEG): container finished" podID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerID="5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56" exitCode=0
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.939977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trkcq" event={"ID":"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2","Type":"ContainerDied","Data":"5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56"}
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.940000 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trkcq"
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.940014 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trkcq" event={"ID":"66a9d639-e5cc-4be5-8de3-75ebbd8d82b2","Type":"ContainerDied","Data":"75bac8bb88a2308b84caa88c983ba7292e87497fa77bdda642fa9cf23a1d4d3d"}
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.940042 4751 scope.go:117] "RemoveContainer" containerID="5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56"
Nov 23 04:47:43 crc kubenswrapper[4751]: I1123 04:47:43.971791 4751 scope.go:117] "RemoveContainer" containerID="555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84"
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.004459 4751 scope.go:117] "RemoveContainer" containerID="e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da"
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.062394 4751 scope.go:117] "RemoveContainer" containerID="5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56"
Nov 23 04:47:44 crc kubenswrapper[4751]: E1123 04:47:44.062879 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56\": container with ID starting with 5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56 not found: ID does not exist" containerID="5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56"
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.062954 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56"} err="failed to get container status \"5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56\": rpc error: code = NotFound desc = could not find container \"5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56\": container with ID starting with 5acf44e5af32298e10492273cb2c9ccaf409eba72c0f2c558100aa5a18477d56 not found: ID does not exist"
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.063014 4751 scope.go:117] "RemoveContainer" containerID="555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84"
Nov 23 04:47:44 crc kubenswrapper[4751]: E1123 04:47:44.076046 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84\": container with ID starting with 555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84 not found: ID does not exist" containerID="555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84"
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.076102 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84"} err="failed to get container status \"555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84\": rpc error: code = NotFound desc = could not find container \"555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84\": container with ID starting with 555e8e42050fa74a19d7c45a28e56cef9d4fe4a874f1a4417464d8eef5452f84 not found: ID does not exist"
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.076151 4751 scope.go:117] "RemoveContainer" containerID="e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da"
Nov 23 04:47:44 crc kubenswrapper[4751]: E1123 04:47:44.078549 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da\": container with ID starting with e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da not found: ID does not exist" containerID="e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da"
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.078576 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da"} err="failed to get container status \"e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da\": rpc error: code = NotFound desc = could not find container \"e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da\": container with ID starting with e9935385247f4a00bd7614d4b35ac77d874d97321412f27091c461934e0c51da not found: ID does not exist"
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.167648 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" (UID: "66a9d639-e5cc-4be5-8de3-75ebbd8d82b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.187760 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.287776 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trkcq"]
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.301724 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-trkcq"]
Nov 23 04:47:44 crc kubenswrapper[4751]: I1123 04:47:44.660899 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" path="/var/lib/kubelet/pods/66a9d639-e5cc-4be5-8de3-75ebbd8d82b2/volumes"
Nov 23 04:47:47 crc kubenswrapper[4751]: I1123 04:47:47.644329 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:47:47 crc kubenswrapper[4751]: E1123 04:47:47.645299 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:47:58 crc kubenswrapper[4751]: I1123 04:47:58.644951 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:47:58 crc kubenswrapper[4751]: E1123 04:47:58.646141 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:48:09 crc kubenswrapper[4751]: I1123 04:48:09.644829 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:48:09 crc kubenswrapper[4751]: E1123 04:48:09.646951 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:48:21 crc kubenswrapper[4751]: I1123 04:48:21.644518 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:48:21 crc kubenswrapper[4751]: E1123 04:48:21.645395 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:48:35 crc kubenswrapper[4751]: I1123 04:48:35.644402 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:48:35 crc kubenswrapper[4751]: E1123 04:48:35.645267 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"
Nov 23 04:48:50 crc kubenswrapper[4751]: I1123 04:48:50.644919 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013"
Nov 23 04:48:51 crc kubenswrapper[4751]: I1123 04:48:51.723456 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"ed1541714abc2376ae800a85b9d54abcd651ea34aafcda60af88de4a01b521c1"}
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.861621 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5wrns"]
Nov 23 04:50:42 crc kubenswrapper[4751]: E1123 04:50:42.862554 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerName="registry-server"
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.862569 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerName="registry-server"
Nov 23 04:50:42 crc kubenswrapper[4751]: E1123 04:50:42.862603 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerName="extract-content"
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.862613 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerName="extract-content"
Nov 23 04:50:42 crc kubenswrapper[4751]: E1123 04:50:42.862627 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerName="extract-utilities"
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.862635 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerName="extract-utilities"
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.862876 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a9d639-e5cc-4be5-8de3-75ebbd8d82b2" containerName="registry-server"
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.865679 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.888364 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wrns"]
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.946622 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwsw\" (UniqueName: \"kubernetes.io/projected/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-kube-api-access-htwsw\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.946894 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-catalog-content\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:42 crc kubenswrapper[4751]: I1123 04:50:42.947291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-utilities\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.049808 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htwsw\" (UniqueName: \"kubernetes.io/projected/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-kube-api-access-htwsw\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.049945 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-catalog-content\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.050078 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-utilities\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.050695 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-utilities\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.050696 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-catalog-content\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.082622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwsw\" (UniqueName: \"kubernetes.io/projected/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-kube-api-access-htwsw\") pod \"certified-operators-5wrns\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") " pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.195594 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.736792 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wrns"]
Nov 23 04:50:43 crc kubenswrapper[4751]: I1123 04:50:43.933702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wrns" event={"ID":"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a","Type":"ContainerStarted","Data":"dcf67594b75a5cebdcec11aa362588e410f8108ffdc609b57ca2822d8be976ec"}
Nov 23 04:50:44 crc kubenswrapper[4751]: I1123 04:50:44.947218 4751 generic.go:334] "Generic (PLEG): container finished" podID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerID="ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f" exitCode=0
Nov 23 04:50:44 crc kubenswrapper[4751]: I1123 04:50:44.947287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wrns" event={"ID":"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a","Type":"ContainerDied","Data":"ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f"}
Nov 23 04:50:45 crc kubenswrapper[4751]: I1123 04:50:45.960591 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wrns" event={"ID":"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a","Type":"ContainerStarted","Data":"3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a"}
Nov 23 04:50:46 crc kubenswrapper[4751]: I1123 04:50:46.974606 4751 generic.go:334] "Generic (PLEG): container finished" podID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerID="3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a" exitCode=0
Nov 23 04:50:46 crc kubenswrapper[4751]: I1123 04:50:46.974693 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wrns" event={"ID":"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a","Type":"ContainerDied","Data":"3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a"}
Nov 23 04:50:48 crc kubenswrapper[4751]: I1123 04:50:48.004520 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wrns" event={"ID":"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a","Type":"ContainerStarted","Data":"588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560"}
Nov 23 04:50:48 crc kubenswrapper[4751]: I1123 04:50:48.034119 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5wrns" podStartSLOduration=3.4115028020000002 podStartE2EDuration="6.034099591s" podCreationTimestamp="2025-11-23 04:50:42 +0000 UTC" firstStartedPulling="2025-11-23 04:50:44.949932241 +0000 UTC m=+3341.143603640" lastFinishedPulling="2025-11-23 04:50:47.57252903 +0000 UTC m=+3343.766200429" observedRunningTime="2025-11-23 04:50:48.026684008 +0000 UTC m=+3344.220355367" watchObservedRunningTime="2025-11-23 04:50:48.034099591 +0000 UTC m=+3344.227770950"
Nov 23 04:50:53 crc kubenswrapper[4751]: I1123 04:50:53.196813 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:53 crc kubenswrapper[4751]: I1123 04:50:53.199249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:53 crc kubenswrapper[4751]: I1123 04:50:53.251839 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:54 crc kubenswrapper[4751]: I1123 04:50:54.138489 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:54 crc kubenswrapper[4751]: I1123 04:50:54.193538 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wrns"]
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.093573 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5wrns" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerName="registry-server" containerID="cri-o://588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560" gracePeriod=2
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.647032 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.763839 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-utilities\") pod \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") "
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.763968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htwsw\" (UniqueName: \"kubernetes.io/projected/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-kube-api-access-htwsw\") pod \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") "
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.764016 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-catalog-content\") pod \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\" (UID: \"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a\") "
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.764917 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-utilities" (OuterVolumeSpecName: "utilities") pod "ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" (UID: "ea7fe9a2-c25c-4e23-9079-9d9b7deb244a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.770096 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-kube-api-access-htwsw" (OuterVolumeSpecName: "kube-api-access-htwsw") pod "ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" (UID: "ea7fe9a2-c25c-4e23-9079-9d9b7deb244a"). InnerVolumeSpecName "kube-api-access-htwsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.771498 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 04:50:56 crc kubenswrapper[4751]: I1123 04:50:56.771545 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htwsw\" (UniqueName: \"kubernetes.io/projected/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-kube-api-access-htwsw\") on node \"crc\" DevicePath \"\""
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.106307 4751 generic.go:334] "Generic (PLEG): container finished" podID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerID="588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560" exitCode=0
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.106401 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wrns" event={"ID":"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a","Type":"ContainerDied","Data":"588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560"}
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.106703 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wrns" event={"ID":"ea7fe9a2-c25c-4e23-9079-9d9b7deb244a","Type":"ContainerDied","Data":"dcf67594b75a5cebdcec11aa362588e410f8108ffdc609b57ca2822d8be976ec"}
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.106731 4751 scope.go:117] "RemoveContainer" containerID="588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.106477 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wrns"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.146762 4751 scope.go:117] "RemoveContainer" containerID="3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.178037 4751 scope.go:117] "RemoveContainer" containerID="ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.187811 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" (UID: "ea7fe9a2-c25c-4e23-9079-9d9b7deb244a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.281087 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.281720 4751 scope.go:117] "RemoveContainer" containerID="588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560"
Nov 23 04:50:57 crc kubenswrapper[4751]: E1123 04:50:57.282219 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560\": container with ID starting with 588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560 not found: ID does not exist" containerID="588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.282248 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560"} err="failed to get container status \"588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560\": rpc error: code = NotFound desc = could not find container \"588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560\": container with ID starting with 588fd5865d54734b9ad1a759bcfffc9070f6a8a879b29a7f7abf76301692f560 not found: ID does not exist"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.282268 4751 scope.go:117] "RemoveContainer" containerID="3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a"
Nov 23 04:50:57 crc kubenswrapper[4751]: E1123 04:50:57.282570 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a\": container with ID starting with 3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a not found: ID does not exist" containerID="3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.282599 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a"} err="failed to get container status \"3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a\": rpc error: code = NotFound desc = could not find container \"3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a\": container with ID starting with 3d055351f6cc3efbd3015c1fbbbe04ea49a79866e56d9459c3188af90adea95a not found: ID does not exist"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.282618 4751 scope.go:117] "RemoveContainer" containerID="ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f"
Nov 23 04:50:57 crc kubenswrapper[4751]: E1123 04:50:57.282845 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f\": container with ID starting with ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f not found: ID does not exist" containerID="ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.282869 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f"} err="failed to get container status \"ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f\": rpc error: code = NotFound desc = could not find container \"ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f\": container with ID starting with ab78132f8a3d760834492d9fdafb6c0b2b45bad2a8100f5fd41366547e49c10f not found: ID does not exist"
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.450517 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wrns"]
Nov 23 04:50:57 crc kubenswrapper[4751]: I1123 04:50:57.464199 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5wrns"]
Nov 23 04:50:58 crc kubenswrapper[4751]: I1123 04:50:58.664528 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" path="/var/lib/kubelet/pods/ea7fe9a2-c25c-4e23-9079-9d9b7deb244a/volumes"
Nov 23 04:51:08 crc kubenswrapper[4751]: I1123 04:51:08.114699 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 04:51:08 crc kubenswrapper[4751]: I1123 04:51:08.115373 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.393007 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v4dsj"]
Nov 23 04:51:10 crc kubenswrapper[4751]: E1123 04:51:10.394799 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerName="extract-utilities"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.394827 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerName="extract-utilities"
Nov 23 04:51:10 crc kubenswrapper[4751]: E1123 04:51:10.394858 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerName="registry-server"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.394871 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerName="registry-server"
Nov 23 04:51:10 crc kubenswrapper[4751]: E1123 04:51:10.394893 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerName="extract-content"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.394906 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerName="extract-content"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.395252 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7fe9a2-c25c-4e23-9079-9d9b7deb244a" containerName="registry-server"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.397173 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.414710 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4dsj"]
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.558323 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-catalog-content\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.558391 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd82t\" (UniqueName: \"kubernetes.io/projected/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-kube-api-access-qd82t\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.559567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-utilities\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.661673 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-utilities\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.661909 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-catalog-content\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.661933 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd82t\" (UniqueName: \"kubernetes.io/projected/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-kube-api-access-qd82t\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.662448 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-utilities\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.662508 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-catalog-content\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.695034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd82t\" (UniqueName: \"kubernetes.io/projected/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-kube-api-access-qd82t\") pod \"redhat-operators-v4dsj\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:10 crc kubenswrapper[4751]: I1123 04:51:10.725521 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:11 crc kubenswrapper[4751]: I1123 04:51:11.195966 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4dsj"]
Nov 23 04:51:11 crc kubenswrapper[4751]: I1123 04:51:11.266097 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4dsj" event={"ID":"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542","Type":"ContainerStarted","Data":"4658f2f5851043ca24aae4dee8ac53ddba2ae8ddcde86b254a9fdbb162e2e65b"}
Nov 23 04:51:12 crc kubenswrapper[4751]: I1123 04:51:12.279481 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerID="7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34" exitCode=0
Nov 23 04:51:12 crc kubenswrapper[4751]: I1123 04:51:12.279629 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4dsj" event={"ID":"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542","Type":"ContainerDied","Data":"7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34"}
Nov 23 04:51:13 crc kubenswrapper[4751]: I1123 04:51:13.295956 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4dsj" event={"ID":"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542","Type":"ContainerStarted","Data":"5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9"}
Nov 23 04:51:15 crc kubenswrapper[4751]: I1123 04:51:15.320862 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerID="5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9" exitCode=0
Nov 23 04:51:15 crc kubenswrapper[4751]: I1123 04:51:15.320984 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4dsj" event={"ID":"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542","Type":"ContainerDied","Data":"5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9"}
Nov 23 04:51:16 crc kubenswrapper[4751]: I1123 04:51:16.335319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4dsj" event={"ID":"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542","Type":"ContainerStarted","Data":"757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7"}
Nov 23 04:51:16 crc kubenswrapper[4751]: I1123 04:51:16.365680 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v4dsj" podStartSLOduration=2.914189141 podStartE2EDuration="6.365660692s" podCreationTimestamp="2025-11-23 04:51:10 +0000 UTC" firstStartedPulling="2025-11-23 04:51:12.281523712 +0000 UTC m=+3368.475195071" lastFinishedPulling="2025-11-23 04:51:15.732995253 +0000 UTC m=+3371.926666622" observedRunningTime="2025-11-23 04:51:16.358671051 +0000 UTC m=+3372.552342430" watchObservedRunningTime="2025-11-23 04:51:16.365660692 +0000 UTC m=+3372.559332051"
Nov 23 04:51:20 crc kubenswrapper[4751]: I1123 04:51:20.726046 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v4dsj"
Nov 23 04:51:20 crc kubenswrapper[4751]: I1123 04:51:20.727070 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v4dsj" Nov 23 04:51:21 crc kubenswrapper[4751]: I1123 04:51:21.799889 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v4dsj" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="registry-server" probeResult="failure" output=< Nov 23 04:51:21 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Nov 23 04:51:21 crc kubenswrapper[4751]: > Nov 23 04:51:30 crc kubenswrapper[4751]: I1123 04:51:30.810407 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v4dsj" Nov 23 04:51:30 crc kubenswrapper[4751]: I1123 04:51:30.872804 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v4dsj" Nov 23 04:51:31 crc kubenswrapper[4751]: I1123 04:51:31.058561 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v4dsj"] Nov 23 04:51:32 crc kubenswrapper[4751]: I1123 04:51:32.500393 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v4dsj" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="registry-server" containerID="cri-o://757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7" gracePeriod=2 Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.105294 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4dsj" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.159728 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-catalog-content\") pod \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.159929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd82t\" (UniqueName: \"kubernetes.io/projected/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-kube-api-access-qd82t\") pod \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.159972 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-utilities\") pod \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\" (UID: \"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542\") " Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.161128 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-utilities" (OuterVolumeSpecName: "utilities") pod "6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" (UID: "6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.169545 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-kube-api-access-qd82t" (OuterVolumeSpecName: "kube-api-access-qd82t") pod "6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" (UID: "6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542"). InnerVolumeSpecName "kube-api-access-qd82t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.263555 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd82t\" (UniqueName: \"kubernetes.io/projected/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-kube-api-access-qd82t\") on node \"crc\" DevicePath \"\"" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.263822 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.282736 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" (UID: "6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.365088 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.517183 4751 generic.go:334] "Generic (PLEG): container finished" podID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerID="757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7" exitCode=0 Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.517253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4dsj" event={"ID":"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542","Type":"ContainerDied","Data":"757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7"} Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.517314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4dsj" event={"ID":"6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542","Type":"ContainerDied","Data":"4658f2f5851043ca24aae4dee8ac53ddba2ae8ddcde86b254a9fdbb162e2e65b"} Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.517266 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v4dsj" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.517378 4751 scope.go:117] "RemoveContainer" containerID="757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.542967 4751 scope.go:117] "RemoveContainer" containerID="5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.586770 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v4dsj"] Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.588294 4751 scope.go:117] "RemoveContainer" containerID="7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.601783 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v4dsj"] Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.628066 4751 scope.go:117] "RemoveContainer" containerID="757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7" Nov 23 04:51:33 crc kubenswrapper[4751]: E1123 04:51:33.628608 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7\": container with ID starting with 757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7 not found: ID does not exist" containerID="757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.628647 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7"} err="failed to get container status \"757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7\": rpc error: code = NotFound desc = could not find container \"757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7\": container with ID starting with 757f14f368183c78db6db3f1b5ebe21817db50fe17bbbc84c47038903dec0ae7 not found: ID does not exist" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.628673 4751 scope.go:117] "RemoveContainer" containerID="5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9" Nov 23 04:51:33 crc kubenswrapper[4751]: E1123 04:51:33.629042 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9\": container with ID starting with 5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9 not found: ID does not exist" containerID="5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.629067 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9"} err="failed to get container status \"5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9\": rpc error: code = NotFound desc = could not find container \"5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9\": container with ID starting with 5a3134964009024bd1adefa45ce59fe73af426ec483000b3d79c6edb7c868ab9 not found: ID does not exist" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.629083 4751 scope.go:117] "RemoveContainer" 
containerID="7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34" Nov 23 04:51:33 crc kubenswrapper[4751]: E1123 04:51:33.629336 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34\": container with ID starting with 7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34 not found: ID does not exist" containerID="7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34" Nov 23 04:51:33 crc kubenswrapper[4751]: I1123 04:51:33.629386 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34"} err="failed to get container status \"7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34\": rpc error: code = NotFound desc = could not find container \"7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34\": container with ID starting with 7a0d166c9923358148e02645d3f854171e20bac5102789fca40ecf75d6e00e34 not found: ID does not exist" Nov 23 04:51:34 crc kubenswrapper[4751]: I1123 04:51:34.663618 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" path="/var/lib/kubelet/pods/6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542/volumes" Nov 23 04:51:38 crc kubenswrapper[4751]: I1123 04:51:38.114967 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:51:38 crc kubenswrapper[4751]: I1123 04:51:38.115491 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.479779 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ffjgm"] Nov 23 04:52:04 crc kubenswrapper[4751]: E1123 04:52:04.481646 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="extract-content" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.481730 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="extract-content" Nov 23 04:52:04 crc kubenswrapper[4751]: E1123 04:52:04.481788 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="registry-server" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.481848 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="registry-server" Nov 23 04:52:04 crc kubenswrapper[4751]: E1123 04:52:04.481903 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="extract-utilities" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.481957 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="extract-utilities" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 
04:52:04.482224 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f2e8cc4-855b-4b7c-bc93-6c07ef6d2542" containerName="registry-server" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.485017 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.503721 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffjgm"] Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.642161 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl6np\" (UniqueName: \"kubernetes.io/projected/f0ae38ce-6423-4d0a-a148-6c4f627ce741-kube-api-access-vl6np\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.642258 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-utilities\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.642308 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-catalog-content\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.743605 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-utilities\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.743677 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-catalog-content\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.743825 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl6np\" (UniqueName: \"kubernetes.io/projected/f0ae38ce-6423-4d0a-a148-6c4f627ce741-kube-api-access-vl6np\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.744230 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-utilities\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.744311 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-catalog-content\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.770773 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl6np\" (UniqueName: \"kubernetes.io/projected/f0ae38ce-6423-4d0a-a148-6c4f627ce741-kube-api-access-vl6np\") pod \"redhat-marketplace-ffjgm\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:04 crc kubenswrapper[4751]: I1123 04:52:04.824370 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:05 crc kubenswrapper[4751]: I1123 04:52:05.342498 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffjgm"] Nov 23 04:52:05 crc kubenswrapper[4751]: I1123 04:52:05.868007 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerID="90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d" exitCode=0 Nov 23 04:52:05 crc kubenswrapper[4751]: I1123 04:52:05.868088 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffjgm" event={"ID":"f0ae38ce-6423-4d0a-a148-6c4f627ce741","Type":"ContainerDied","Data":"90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d"} Nov 23 04:52:05 crc kubenswrapper[4751]: I1123 04:52:05.868508 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffjgm" event={"ID":"f0ae38ce-6423-4d0a-a148-6c4f627ce741","Type":"ContainerStarted","Data":"80a028093d297e8a5c59b56bb5ced43a3292a8c05838c6f099c2c0d32e4af0d4"} Nov 23 04:52:06 crc kubenswrapper[4751]: I1123 04:52:06.889871 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerID="59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db" exitCode=0 Nov 23 04:52:06 crc kubenswrapper[4751]: I1123 04:52:06.890237 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffjgm" event={"ID":"f0ae38ce-6423-4d0a-a148-6c4f627ce741","Type":"ContainerDied","Data":"59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db"} Nov 23 04:52:07 crc kubenswrapper[4751]: I1123 04:52:07.905265 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffjgm" event={"ID":"f0ae38ce-6423-4d0a-a148-6c4f627ce741","Type":"ContainerStarted","Data":"63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6"} Nov 23 04:52:07 crc kubenswrapper[4751]: I1123 04:52:07.926320 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ffjgm" podStartSLOduration=2.4392333 podStartE2EDuration="3.926304035s" podCreationTimestamp="2025-11-23 04:52:04 +0000 UTC" firstStartedPulling="2025-11-23 04:52:05.870396871 +0000 UTC m=+3422.064068230" lastFinishedPulling="2025-11-23 04:52:07.357467606 +0000 UTC m=+3423.551138965" observedRunningTime="2025-11-23 04:52:07.92339389 +0000 UTC m=+3424.117065259" watchObservedRunningTime="2025-11-23 04:52:07.926304035 +0000 UTC m=+3424.119975394" Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.114205 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.114268 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.114313 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.115148 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed1541714abc2376ae800a85b9d54abcd651ea34aafcda60af88de4a01b521c1"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.115219 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://ed1541714abc2376ae800a85b9d54abcd651ea34aafcda60af88de4a01b521c1" gracePeriod=600 Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.915961 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="ed1541714abc2376ae800a85b9d54abcd651ea34aafcda60af88de4a01b521c1" exitCode=0 Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.916167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"ed1541714abc2376ae800a85b9d54abcd651ea34aafcda60af88de4a01b521c1"} Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.916655 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9"} Nov 23 04:52:08 crc kubenswrapper[4751]: I1123 04:52:08.916688 4751 scope.go:117] "RemoveContainer" containerID="9c57dd33f186a8481ac0130d2f03fc5f67a8d4b1f386a04fbe23a112a4eca013" Nov 23 04:52:11 crc kubenswrapper[4751]: I1123 04:52:11.963267 4751 generic.go:334] "Generic (PLEG): container finished" podID="941e3bda-6f4a-481b-bb73-1c531d70607e" containerID="73091f5e7ff988bb2655e3c5fb52a12762135e6e0ae47527bda82a1997437224" exitCode=0 Nov 23 04:52:11 crc kubenswrapper[4751]: I1123 04:52:11.963324 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"941e3bda-6f4a-481b-bb73-1c531d70607e","Type":"ContainerDied","Data":"73091f5e7ff988bb2655e3c5fb52a12762135e6e0ae47527bda82a1997437224"} Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.478018 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.637191 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfv8f\" (UniqueName: \"kubernetes.io/projected/941e3bda-6f4a-481b-bb73-1c531d70607e-kube-api-access-mfv8f\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.637250 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ca-certs\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.637308 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-config-data\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.637477 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-workdir\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.637558 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-temporary\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.638332 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.638479 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config-secret\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.638602 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-config-data" (OuterVolumeSpecName: "config-data") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.638987 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ssh-key\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.639066 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.639100 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config\") pod \"941e3bda-6f4a-481b-bb73-1c531d70607e\" (UID: \"941e3bda-6f4a-481b-bb73-1c531d70607e\") " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.640038 4751 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.640078 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.644320 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.644508 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941e3bda-6f4a-481b-bb73-1c531d70607e-kube-api-access-mfv8f" (OuterVolumeSpecName: "kube-api-access-mfv8f") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "kube-api-access-mfv8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.647745 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.665906 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.672564 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.688192 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.714918 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "941e3bda-6f4a-481b-bb73-1c531d70607e" (UID: "941e3bda-6f4a-481b-bb73-1c531d70607e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.742521 4751 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.742562 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.742576 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.742590 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfv8f\" (UniqueName: \"kubernetes.io/projected/941e3bda-6f4a-481b-bb73-1c531d70607e-kube-api-access-mfv8f\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.742602 4751 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.742614 4751 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/941e3bda-6f4a-481b-bb73-1c531d70607e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.742627 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/941e3bda-6f4a-481b-bb73-1c531d70607e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.770226 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 
04:52:13.847145 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.989901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"941e3bda-6f4a-481b-bb73-1c531d70607e","Type":"ContainerDied","Data":"4a365d5da6bbe61f129ecc96f16cdc08cd153a04597adc9147c94b94dda4e72a"} Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.990333 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a365d5da6bbe61f129ecc96f16cdc08cd153a04597adc9147c94b94dda4e72a" Nov 23 04:52:13 crc kubenswrapper[4751]: I1123 04:52:13.990020 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 04:52:14 crc kubenswrapper[4751]: I1123 04:52:14.825189 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:14 crc kubenswrapper[4751]: I1123 04:52:14.825272 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:14 crc kubenswrapper[4751]: I1123 04:52:14.910370 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:15 crc kubenswrapper[4751]: I1123 04:52:15.076277 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:15 crc kubenswrapper[4751]: I1123 04:52:15.160130 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffjgm"] Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.020563 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ffjgm" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerName="registry-server" containerID="cri-o://63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6" gracePeriod=2 Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.592598 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.734570 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl6np\" (UniqueName: \"kubernetes.io/projected/f0ae38ce-6423-4d0a-a148-6c4f627ce741-kube-api-access-vl6np\") pod \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.734791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-utilities\") pod \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.734924 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-catalog-content\") pod \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\" (UID: \"f0ae38ce-6423-4d0a-a148-6c4f627ce741\") " Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.737254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-utilities" (OuterVolumeSpecName: "utilities") pod "f0ae38ce-6423-4d0a-a148-6c4f627ce741" (UID: "f0ae38ce-6423-4d0a-a148-6c4f627ce741"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.749989 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0ae38ce-6423-4d0a-a148-6c4f627ce741-kube-api-access-vl6np" (OuterVolumeSpecName: "kube-api-access-vl6np") pod "f0ae38ce-6423-4d0a-a148-6c4f627ce741" (UID: "f0ae38ce-6423-4d0a-a148-6c4f627ce741"). InnerVolumeSpecName "kube-api-access-vl6np". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.759392 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0ae38ce-6423-4d0a-a148-6c4f627ce741" (UID: "f0ae38ce-6423-4d0a-a148-6c4f627ce741"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.837868 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.837922 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0ae38ce-6423-4d0a-a148-6c4f627ce741-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:17 crc kubenswrapper[4751]: I1123 04:52:17.837944 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl6np\" (UniqueName: \"kubernetes.io/projected/f0ae38ce-6423-4d0a-a148-6c4f627ce741-kube-api-access-vl6np\") on node \"crc\" DevicePath \"\"" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.034282 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerID="63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6" exitCode=0 Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.034377 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffjgm" event={"ID":"f0ae38ce-6423-4d0a-a148-6c4f627ce741","Type":"ContainerDied","Data":"63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6"} Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.034718 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffjgm" event={"ID":"f0ae38ce-6423-4d0a-a148-6c4f627ce741","Type":"ContainerDied","Data":"80a028093d297e8a5c59b56bb5ced43a3292a8c05838c6f099c2c0d32e4af0d4"} Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.034427 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffjgm" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.034759 4751 scope.go:117] "RemoveContainer" containerID="63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.083555 4751 scope.go:117] "RemoveContainer" containerID="59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.099376 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffjgm"] Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.109555 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffjgm"] Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.112043 4751 scope.go:117] "RemoveContainer" containerID="90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.179306 4751 scope.go:117] "RemoveContainer" containerID="63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6" Nov 23 04:52:18 crc kubenswrapper[4751]: E1123 04:52:18.179925 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6\": container with ID starting with 63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6 not found: ID does not exist" containerID="63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.179965 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6"} err="failed to get container status \"63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6\": rpc error: code = NotFound desc = could not find container \"63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6\": container with ID starting with 63e6b618b9c66f19d24eab20ff608f9f57fc0752990df68e0f95ea47631484f6 not found: ID does not exist" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.179995 4751 scope.go:117] "RemoveContainer" containerID="59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db" Nov 23 04:52:18 crc kubenswrapper[4751]: E1123 04:52:18.180339 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db\": container with ID starting with 59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db not found: ID does not exist" containerID="59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.180453 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db"} err="failed to get container status \"59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db\": rpc error: code = NotFound desc = could not find container \"59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db\": container with ID starting with 59f3037fd1ef1590901ce0f67a76bc687c6aa70c853f43e132f566982bde11db not found: ID does not exist" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.180477 4751 scope.go:117] "RemoveContainer" 
containerID="90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d" Nov 23 04:52:18 crc kubenswrapper[4751]: E1123 04:52:18.180784 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d\": container with ID starting with 90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d not found: ID does not exist" containerID="90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.180872 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d"} err="failed to get container status \"90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d\": rpc error: code = NotFound desc = could not find container \"90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d\": container with ID starting with 90fc8301182a28484117e2ade11ade13b26d4f5b7198176d2d3a8749f0c2643d not found: ID does not exist" Nov 23 04:52:18 crc kubenswrapper[4751]: I1123 04:52:18.662324 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" path="/var/lib/kubelet/pods/f0ae38ce-6423-4d0a-a148-6c4f627ce741/volumes" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.790505 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 04:52:22 crc kubenswrapper[4751]: E1123 04:52:22.791637 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerName="extract-content" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.791654 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerName="extract-content" Nov 23 04:52:22 crc kubenswrapper[4751]: E1123 04:52:22.791674 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941e3bda-6f4a-481b-bb73-1c531d70607e" containerName="tempest-tests-tempest-tests-runner" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.791683 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="941e3bda-6f4a-481b-bb73-1c531d70607e" containerName="tempest-tests-tempest-tests-runner" Nov 23 04:52:22 crc kubenswrapper[4751]: E1123 04:52:22.791704 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerName="registry-server" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.791711 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerName="registry-server" Nov 23 04:52:22 crc kubenswrapper[4751]: E1123 04:52:22.791732 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerName="extract-utilities" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.791740 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerName="extract-utilities" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.792047 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ae38ce-6423-4d0a-a148-6c4f627ce741" containerName="registry-server" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.792069 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="941e3bda-6f4a-481b-bb73-1c531d70607e" containerName="tempest-tests-tempest-tests-runner" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.793455 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.795678 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-w2rjq" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.801603 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.948045 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgxx\" (UniqueName: \"kubernetes.io/projected/9222b521-983b-458c-b312-411689b31bec-kube-api-access-gzgxx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9222b521-983b-458c-b312-411689b31bec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:22 crc kubenswrapper[4751]: I1123 04:52:22.948464 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9222b521-983b-458c-b312-411689b31bec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:23 crc kubenswrapper[4751]: I1123 04:52:23.050853 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzgxx\" (UniqueName: \"kubernetes.io/projected/9222b521-983b-458c-b312-411689b31bec-kube-api-access-gzgxx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9222b521-983b-458c-b312-411689b31bec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:23 crc kubenswrapper[4751]: I1123 04:52:23.051049 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9222b521-983b-458c-b312-411689b31bec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:23 crc kubenswrapper[4751]: I1123 04:52:23.052690 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9222b521-983b-458c-b312-411689b31bec\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:23 crc kubenswrapper[4751]: I1123 04:52:23.087824 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzgxx\" (UniqueName: \"kubernetes.io/projected/9222b521-983b-458c-b312-411689b31bec-kube-api-access-gzgxx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9222b521-983b-458c-b312-411689b31bec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:23 crc kubenswrapper[4751]: I1123 04:52:23.103936 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"9222b521-983b-458c-b312-411689b31bec\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:23 crc kubenswrapper[4751]: I1123 04:52:23.127080 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 04:52:23 crc kubenswrapper[4751]: I1123 04:52:23.648555 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 04:52:24 crc kubenswrapper[4751]: I1123 04:52:24.111484 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9222b521-983b-458c-b312-411689b31bec","Type":"ContainerStarted","Data":"9258f7b4c90c706d108aef0087737b4b1544dc26536b1338347fed1b3646d345"} Nov 23 04:52:25 crc kubenswrapper[4751]: I1123 04:52:25.126423 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"9222b521-983b-458c-b312-411689b31bec","Type":"ContainerStarted","Data":"4613ba3fc4e4d750142ed82c45417f970fe1b43471e91bd575c8fb6750bb54db"} Nov 23 04:52:25 crc kubenswrapper[4751]: I1123 04:52:25.157803 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.31718574 podStartE2EDuration="3.157773969s" podCreationTimestamp="2025-11-23 04:52:22 +0000 UTC" firstStartedPulling="2025-11-23 04:52:23.655161981 +0000 UTC m=+3439.848833350" lastFinishedPulling="2025-11-23 04:52:24.49575022 +0000 UTC m=+3440.689421579" observedRunningTime="2025-11-23 04:52:25.145929992 +0000 UTC m=+3441.339601391" watchObservedRunningTime="2025-11-23 04:52:25.157773969 +0000 UTC m=+3441.351445358" Nov 23 04:52:47 crc kubenswrapper[4751]: I1123 04:52:47.918059 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pgdbm/must-gather-4gcp9"] Nov 23 04:52:47 crc kubenswrapper[4751]: I1123 04:52:47.922331 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:52:47 crc kubenswrapper[4751]: I1123 04:52:47.924322 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pgdbm"/"default-dockercfg-zznsl" Nov 23 04:52:47 crc kubenswrapper[4751]: I1123 04:52:47.925121 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pgdbm"/"openshift-service-ca.crt" Nov 23 04:52:47 crc kubenswrapper[4751]: I1123 04:52:47.925461 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pgdbm"/"kube-root-ca.crt" Nov 23 04:52:47 crc kubenswrapper[4751]: I1123 04:52:47.927662 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pgdbm/must-gather-4gcp9"] Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.021980 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2s4f\" (UniqueName: \"kubernetes.io/projected/6631e16e-9bbd-4092-a130-5ba01098f7be-kube-api-access-q2s4f\") pod \"must-gather-4gcp9\" (UID: \"6631e16e-9bbd-4092-a130-5ba01098f7be\") " pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.022033 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6631e16e-9bbd-4092-a130-5ba01098f7be-must-gather-output\") pod \"must-gather-4gcp9\" (UID: \"6631e16e-9bbd-4092-a130-5ba01098f7be\") " pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.124139 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2s4f\" (UniqueName: \"kubernetes.io/projected/6631e16e-9bbd-4092-a130-5ba01098f7be-kube-api-access-q2s4f\") pod \"must-gather-4gcp9\" (UID: \"6631e16e-9bbd-4092-a130-5ba01098f7be\") " pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.124188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6631e16e-9bbd-4092-a130-5ba01098f7be-must-gather-output\") pod \"must-gather-4gcp9\" (UID: \"6631e16e-9bbd-4092-a130-5ba01098f7be\") " pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.124869 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6631e16e-9bbd-4092-a130-5ba01098f7be-must-gather-output\") pod \"must-gather-4gcp9\" (UID: \"6631e16e-9bbd-4092-a130-5ba01098f7be\") " pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.149977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2s4f\" (UniqueName: \"kubernetes.io/projected/6631e16e-9bbd-4092-a130-5ba01098f7be-kube-api-access-q2s4f\") pod \"must-gather-4gcp9\" (UID: \"6631e16e-9bbd-4092-a130-5ba01098f7be\") " pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.255547 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.767773 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pgdbm/must-gather-4gcp9"] Nov 23 04:52:48 crc kubenswrapper[4751]: W1123 04:52:48.770527 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6631e16e_9bbd_4092_a130_5ba01098f7be.slice/crio-654512d1f19ce90446eaaf0c7b8b46aa1a516aa6866435d8064f0492bc8926e0 WatchSource:0}: Error finding container 654512d1f19ce90446eaaf0c7b8b46aa1a516aa6866435d8064f0492bc8926e0: Status 404 returned error can't find the container with id 654512d1f19ce90446eaaf0c7b8b46aa1a516aa6866435d8064f0492bc8926e0 Nov 23 04:52:48 crc kubenswrapper[4751]: I1123 04:52:48.773628 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:52:49 crc kubenswrapper[4751]: I1123 04:52:49.426178 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" event={"ID":"6631e16e-9bbd-4092-a130-5ba01098f7be","Type":"ContainerStarted","Data":"654512d1f19ce90446eaaf0c7b8b46aa1a516aa6866435d8064f0492bc8926e0"} Nov 23 04:52:55 crc kubenswrapper[4751]: I1123 04:52:55.483532 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" event={"ID":"6631e16e-9bbd-4092-a130-5ba01098f7be","Type":"ContainerStarted","Data":"ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb"} Nov 23 04:52:55 crc kubenswrapper[4751]: I1123 04:52:55.484102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" event={"ID":"6631e16e-9bbd-4092-a130-5ba01098f7be","Type":"ContainerStarted","Data":"359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6"} Nov 23 04:52:55 crc kubenswrapper[4751]: I1123 04:52:55.515551 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" podStartSLOduration=2.613769079 podStartE2EDuration="8.515523137s" podCreationTimestamp="2025-11-23 04:52:47 +0000 UTC" firstStartedPulling="2025-11-23 04:52:48.773208991 +0000 UTC m=+3464.966880360" lastFinishedPulling="2025-11-23 04:52:54.674963059 +0000 UTC m=+3470.868634418" observedRunningTime="2025-11-23 04:52:55.507078638 +0000 UTC m=+3471.700750027" watchObservedRunningTime="2025-11-23 04:52:55.515523137 +0000 UTC m=+3471.709194536" Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.532926 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-658wq"] Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.534701 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.643106 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vrd\" (UniqueName: \"kubernetes.io/projected/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-kube-api-access-d9vrd\") pod \"crc-debug-658wq\" (UID: \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\") " pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.643657 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-host\") pod \"crc-debug-658wq\" (UID: \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\") " pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.745645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vrd\" (UniqueName: \"kubernetes.io/projected/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-kube-api-access-d9vrd\") pod \"crc-debug-658wq\" (UID: \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\") " pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.746006 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-host\") pod \"crc-debug-658wq\" (UID: \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\") " pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.746110 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-host\") pod \"crc-debug-658wq\" (UID: \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\") " pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.765653 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vrd\" (UniqueName: \"kubernetes.io/projected/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-kube-api-access-d9vrd\") pod \"crc-debug-658wq\" (UID: \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\") " pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:52:58 crc kubenswrapper[4751]: I1123 04:52:58.860910 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:52:59 crc kubenswrapper[4751]: I1123 04:52:59.523624 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/crc-debug-658wq" event={"ID":"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4","Type":"ContainerStarted","Data":"e362b633c1aa1ec8de3e8b6020a685f877a1ad4ec60dcb53e0fbc282739106f6"} Nov 23 04:53:09 crc kubenswrapper[4751]: I1123 04:53:09.618214 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/crc-debug-658wq" event={"ID":"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4","Type":"ContainerStarted","Data":"921c5e12aeffc3acfe509c22351017f5995ebd7aa0df69c1a0379b4510de69fa"} Nov 23 04:53:09 crc kubenswrapper[4751]: I1123 04:53:09.635623 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pgdbm/crc-debug-658wq" podStartSLOduration=1.366144377 podStartE2EDuration="11.63561s" podCreationTimestamp="2025-11-23 04:52:58 +0000 UTC" firstStartedPulling="2025-11-23 04:52:58.913920208 +0000 UTC m=+3475.107591567" lastFinishedPulling="2025-11-23 04:53:09.183385831 +0000 UTC m=+3485.377057190" observedRunningTime="2025-11-23 04:53:09.629335507 +0000 UTC m=+3485.823006866" watchObservedRunningTime="2025-11-23 04:53:09.63561 +0000 UTC m=+3485.829281359" Nov 23 04:53:48 crc kubenswrapper[4751]: I1123 04:53:48.994430 4751 generic.go:334] "Generic (PLEG): container finished" podID="2e25a55a-9196-44ad-98bb-e2dfa8f43ef4" containerID="921c5e12aeffc3acfe509c22351017f5995ebd7aa0df69c1a0379b4510de69fa" exitCode=0 Nov 23 04:53:48 crc kubenswrapper[4751]: I1123 04:53:48.994594 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/crc-debug-658wq" event={"ID":"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4","Type":"ContainerDied","Data":"921c5e12aeffc3acfe509c22351017f5995ebd7aa0df69c1a0379b4510de69fa"} Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.116458 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.159657 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-658wq"] Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.173529 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-658wq"] Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.231389 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-host\") pod \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\" (UID: \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\") " Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.231490 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-host" (OuterVolumeSpecName: "host") pod "2e25a55a-9196-44ad-98bb-e2dfa8f43ef4" (UID: "2e25a55a-9196-44ad-98bb-e2dfa8f43ef4"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.231550 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9vrd\" (UniqueName: \"kubernetes.io/projected/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-kube-api-access-d9vrd\") pod \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\" (UID: \"2e25a55a-9196-44ad-98bb-e2dfa8f43ef4\") " Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.232035 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-host\") on node \"crc\" DevicePath \"\"" Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.238576 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-kube-api-access-d9vrd" (OuterVolumeSpecName: "kube-api-access-d9vrd") pod "2e25a55a-9196-44ad-98bb-e2dfa8f43ef4" (UID: "2e25a55a-9196-44ad-98bb-e2dfa8f43ef4"). InnerVolumeSpecName "kube-api-access-d9vrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.333698 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9vrd\" (UniqueName: \"kubernetes.io/projected/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4-kube-api-access-d9vrd\") on node \"crc\" DevicePath \"\"" Nov 23 04:53:50 crc kubenswrapper[4751]: I1123 04:53:50.667724 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e25a55a-9196-44ad-98bb-e2dfa8f43ef4" path="/var/lib/kubelet/pods/2e25a55a-9196-44ad-98bb-e2dfa8f43ef4/volumes" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.019602 4751 scope.go:117] "RemoveContainer" containerID="921c5e12aeffc3acfe509c22351017f5995ebd7aa0df69c1a0379b4510de69fa" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.019674 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-658wq" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.371504 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-n9fg8"] Nov 23 04:53:51 crc kubenswrapper[4751]: E1123 04:53:51.372196 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e25a55a-9196-44ad-98bb-e2dfa8f43ef4" containerName="container-00" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.372211 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e25a55a-9196-44ad-98bb-e2dfa8f43ef4" containerName="container-00" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.372449 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e25a55a-9196-44ad-98bb-e2dfa8f43ef4" containerName="container-00" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.373173 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.561956 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqf6d\" (UniqueName: \"kubernetes.io/projected/af27c125-a4cf-4935-83a2-d68af7456fcf-kube-api-access-gqf6d\") pod \"crc-debug-n9fg8\" (UID: \"af27c125-a4cf-4935-83a2-d68af7456fcf\") " pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.562025 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af27c125-a4cf-4935-83a2-d68af7456fcf-host\") pod \"crc-debug-n9fg8\" (UID: \"af27c125-a4cf-4935-83a2-d68af7456fcf\") " pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.664097 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqf6d\" (UniqueName: \"kubernetes.io/projected/af27c125-a4cf-4935-83a2-d68af7456fcf-kube-api-access-gqf6d\") pod \"crc-debug-n9fg8\" (UID: \"af27c125-a4cf-4935-83a2-d68af7456fcf\") " pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.664188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af27c125-a4cf-4935-83a2-d68af7456fcf-host\") pod \"crc-debug-n9fg8\" (UID: \"af27c125-a4cf-4935-83a2-d68af7456fcf\") " pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.664545 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af27c125-a4cf-4935-83a2-d68af7456fcf-host\") pod \"crc-debug-n9fg8\" (UID: \"af27c125-a4cf-4935-83a2-d68af7456fcf\") " pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.687262 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqf6d\" (UniqueName: \"kubernetes.io/projected/af27c125-a4cf-4935-83a2-d68af7456fcf-kube-api-access-gqf6d\") pod \"crc-debug-n9fg8\" (UID: \"af27c125-a4cf-4935-83a2-d68af7456fcf\") " pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:51 crc kubenswrapper[4751]: I1123 04:53:51.703387 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:52 crc kubenswrapper[4751]: I1123 04:53:52.034693 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" event={"ID":"af27c125-a4cf-4935-83a2-d68af7456fcf","Type":"ContainerStarted","Data":"0654b6064b52412c0d1694612aa03bb4993ae0e73133b9c4088e02e6a643feeb"} Nov 23 04:53:52 crc kubenswrapper[4751]: I1123 04:53:52.034961 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" event={"ID":"af27c125-a4cf-4935-83a2-d68af7456fcf","Type":"ContainerStarted","Data":"6f0882fcf8f76d85c9f68be433a7eb5c7b8839d923445b0aa50aebc53f9ee867"} Nov 23 04:53:52 crc kubenswrapper[4751]: I1123 04:53:52.066940 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" podStartSLOduration=1.066914002 podStartE2EDuration="1.066914002s" podCreationTimestamp="2025-11-23 04:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 04:53:52.058948345 +0000 UTC m=+3528.252619724" watchObservedRunningTime="2025-11-23 04:53:52.066914002 +0000 UTC m=+3528.260585401" Nov 23 04:53:53 crc kubenswrapper[4751]: I1123 04:53:53.048303 4751 generic.go:334] "Generic (PLEG): container finished" podID="af27c125-a4cf-4935-83a2-d68af7456fcf" containerID="0654b6064b52412c0d1694612aa03bb4993ae0e73133b9c4088e02e6a643feeb" exitCode=0 Nov 23 04:53:53 crc kubenswrapper[4751]: I1123 04:53:53.048488 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" event={"ID":"af27c125-a4cf-4935-83a2-d68af7456fcf","Type":"ContainerDied","Data":"0654b6064b52412c0d1694612aa03bb4993ae0e73133b9c4088e02e6a643feeb"} Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.167485 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.204050 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-n9fg8"] Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.211038 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af27c125-a4cf-4935-83a2-d68af7456fcf-host\") pod \"af27c125-a4cf-4935-83a2-d68af7456fcf\" (UID: \"af27c125-a4cf-4935-83a2-d68af7456fcf\") " Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.211117 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqf6d\" (UniqueName: \"kubernetes.io/projected/af27c125-a4cf-4935-83a2-d68af7456fcf-kube-api-access-gqf6d\") pod \"af27c125-a4cf-4935-83a2-d68af7456fcf\" (UID: \"af27c125-a4cf-4935-83a2-d68af7456fcf\") " Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.211173 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af27c125-a4cf-4935-83a2-d68af7456fcf-host" (OuterVolumeSpecName: "host") pod "af27c125-a4cf-4935-83a2-d68af7456fcf" (UID: "af27c125-a4cf-4935-83a2-d68af7456fcf"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.211487 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af27c125-a4cf-4935-83a2-d68af7456fcf-host\") on node \"crc\" DevicePath \"\"" Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.213703 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-n9fg8"] Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.217511 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af27c125-a4cf-4935-83a2-d68af7456fcf-kube-api-access-gqf6d" (OuterVolumeSpecName: "kube-api-access-gqf6d") pod "af27c125-a4cf-4935-83a2-d68af7456fcf" (UID: "af27c125-a4cf-4935-83a2-d68af7456fcf"). InnerVolumeSpecName "kube-api-access-gqf6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.312918 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqf6d\" (UniqueName: \"kubernetes.io/projected/af27c125-a4cf-4935-83a2-d68af7456fcf-kube-api-access-gqf6d\") on node \"crc\" DevicePath \"\"" Nov 23 04:53:54 crc kubenswrapper[4751]: I1123 04:53:54.660873 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af27c125-a4cf-4935-83a2-d68af7456fcf" path="/var/lib/kubelet/pods/af27c125-a4cf-4935-83a2-d68af7456fcf/volumes" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.077815 4751 scope.go:117] "RemoveContainer" containerID="0654b6064b52412c0d1694612aa03bb4993ae0e73133b9c4088e02e6a643feeb" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.077913 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-n9fg8" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.387664 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-bfb5z"] Nov 23 04:53:55 crc kubenswrapper[4751]: E1123 04:53:55.389242 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af27c125-a4cf-4935-83a2-d68af7456fcf" containerName="container-00" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.389316 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="af27c125-a4cf-4935-83a2-d68af7456fcf" containerName="container-00" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.389592 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="af27c125-a4cf-4935-83a2-d68af7456fcf" containerName="container-00" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.390272 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.437158 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-host\") pod \"crc-debug-bfb5z\" (UID: \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\") " pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.437419 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgrk\" (UniqueName: \"kubernetes.io/projected/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-kube-api-access-nfgrk\") pod \"crc-debug-bfb5z\" (UID: \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\") " pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.538870 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfgrk\" (UniqueName: \"kubernetes.io/projected/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-kube-api-access-nfgrk\") pod \"crc-debug-bfb5z\" (UID: \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\") " pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.538963 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-host\") pod \"crc-debug-bfb5z\" (UID: \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\") " pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.539090 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-host\") pod \"crc-debug-bfb5z\" (UID: \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\") " pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.573870 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfgrk\" (UniqueName: \"kubernetes.io/projected/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-kube-api-access-nfgrk\") pod \"crc-debug-bfb5z\" (UID: \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\") " pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:55 crc kubenswrapper[4751]: I1123 04:53:55.712190 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:55 crc kubenswrapper[4751]: W1123 04:53:55.768884 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b03e8a5_b7dd_403e_854f_2eb569c18f3c.slice/crio-526ab2183d9d882ed50498f7d12f13dbd9e0181423804da522210d30863e00aa WatchSource:0}: Error finding container 526ab2183d9d882ed50498f7d12f13dbd9e0181423804da522210d30863e00aa: Status 404 returned error can't find the container with id 526ab2183d9d882ed50498f7d12f13dbd9e0181423804da522210d30863e00aa Nov 23 04:53:56 crc kubenswrapper[4751]: I1123 04:53:56.094537 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b03e8a5-b7dd-403e-854f-2eb569c18f3c" containerID="16b9e9d0520434d0102c15b67e4cb659e871296760b61191e916a32adb8f1ac6" exitCode=0 Nov 23 04:53:56 crc kubenswrapper[4751]: I1123 04:53:56.094652 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" event={"ID":"3b03e8a5-b7dd-403e-854f-2eb569c18f3c","Type":"ContainerDied","Data":"16b9e9d0520434d0102c15b67e4cb659e871296760b61191e916a32adb8f1ac6"} Nov 23 04:53:56 crc kubenswrapper[4751]: I1123 04:53:56.094863 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" event={"ID":"3b03e8a5-b7dd-403e-854f-2eb569c18f3c","Type":"ContainerStarted","Data":"526ab2183d9d882ed50498f7d12f13dbd9e0181423804da522210d30863e00aa"} Nov 23 04:53:56 crc kubenswrapper[4751]: I1123 04:53:56.138340 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-bfb5z"] Nov 23 04:53:56 crc kubenswrapper[4751]: I1123 04:53:56.150304 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pgdbm/crc-debug-bfb5z"] Nov 23 04:53:57 crc kubenswrapper[4751]: I1123 04:53:57.209826 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:57 crc kubenswrapper[4751]: I1123 04:53:57.377016 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-host\") pod \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\" (UID: \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\") " Nov 23 04:53:57 crc kubenswrapper[4751]: I1123 04:53:57.377135 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfgrk\" (UniqueName: \"kubernetes.io/projected/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-kube-api-access-nfgrk\") pod \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\" (UID: \"3b03e8a5-b7dd-403e-854f-2eb569c18f3c\") " Nov 23 04:53:57 crc kubenswrapper[4751]: I1123 04:53:57.377166 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-host" (OuterVolumeSpecName: "host") pod "3b03e8a5-b7dd-403e-854f-2eb569c18f3c" (UID: "3b03e8a5-b7dd-403e-854f-2eb569c18f3c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 04:53:57 crc kubenswrapper[4751]: I1123 04:53:57.377787 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-host\") on node \"crc\" DevicePath \"\"" Nov 23 04:53:57 crc kubenswrapper[4751]: I1123 04:53:57.388634 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-kube-api-access-nfgrk" (OuterVolumeSpecName: "kube-api-access-nfgrk") pod "3b03e8a5-b7dd-403e-854f-2eb569c18f3c" (UID: "3b03e8a5-b7dd-403e-854f-2eb569c18f3c"). InnerVolumeSpecName "kube-api-access-nfgrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:53:57 crc kubenswrapper[4751]: I1123 04:53:57.479896 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfgrk\" (UniqueName: \"kubernetes.io/projected/3b03e8a5-b7dd-403e-854f-2eb569c18f3c-kube-api-access-nfgrk\") on node \"crc\" DevicePath \"\"" Nov 23 04:53:58 crc kubenswrapper[4751]: I1123 04:53:58.114025 4751 scope.go:117] "RemoveContainer" containerID="16b9e9d0520434d0102c15b67e4cb659e871296760b61191e916a32adb8f1ac6" Nov 23 04:53:58 crc kubenswrapper[4751]: I1123 04:53:58.114075 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgdbm/crc-debug-bfb5z" Nov 23 04:53:58 crc kubenswrapper[4751]: I1123 04:53:58.661552 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b03e8a5-b7dd-403e-854f-2eb569c18f3c" path="/var/lib/kubelet/pods/3b03e8a5-b7dd-403e-854f-2eb569c18f3c/volumes" Nov 23 04:54:08 crc kubenswrapper[4751]: I1123 04:54:08.114874 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:54:08 crc kubenswrapper[4751]: I1123 04:54:08.115605 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.122856 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b8f7cfdb6-q2828_356c133f-02f2-453d-a0a4-018aa4741eee/barbican-api/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.285067 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b8f7cfdb6-q2828_356c133f-02f2-453d-a0a4-018aa4741eee/barbican-api-log/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.354424 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65dbdd4878-rj2c4_e290c0f4-7b34-4063-a8f8-aa5123762b03/barbican-keystone-listener/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.401425 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65dbdd4878-rj2c4_e290c0f4-7b34-4063-a8f8-aa5123762b03/barbican-keystone-listener-log/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.547194 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-86c6898967-wnj7z_1babe827-384d-4185-90fb-021a93e62b38/barbican-worker-log/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.569822 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86c6898967-wnj7z_1babe827-384d-4185-90fb-021a93e62b38/barbican-worker/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.729802 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl_ac73cf10-7aa5-4958-9238-d5473d368ceb/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.785509 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5c883930-39a6-4aa2-8be9-08ddb0d187e8/ceilometer-central-agent/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.846144 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5c883930-39a6-4aa2-8be9-08ddb0d187e8/ceilometer-notification-agent/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.915415 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5c883930-39a6-4aa2-8be9-08ddb0d187e8/proxy-httpd/0.log" Nov 23 04:54:11 crc kubenswrapper[4751]: I1123 04:54:11.940932 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5c883930-39a6-4aa2-8be9-08ddb0d187e8/sg-core/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.081272 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f/cinder-api-log/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.081797 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f/cinder-api/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.259170 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ba61bff1-41f9-4e95-bde0-0da7b4000a1c/cinder-scheduler/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.284288 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ba61bff1-41f9-4e95-bde0-0da7b4000a1c/probe/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.403310 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-9xskg_d8c7c9fe-7d35-413c-8738-31ec126e8d80/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.523895 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x_ddafc7c0-5c18-49f0-b609-f68959f5bc29/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.572946 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-gt9x6_d97d28a3-afb1-41a4-b623-ed9e4b89ca31/init/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.753915 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-gt9x6_d97d28a3-afb1-41a4-b623-ed9e4b89ca31/init/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.839145 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-gt9x6_d97d28a3-afb1-41a4-b623-ed9e4b89ca31/dnsmasq-dns/0.log" Nov 23 04:54:12 crc kubenswrapper[4751]: I1123 04:54:12.842079 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-99kcg_5a46c73e-f53a-4bcc-8a3a-d5982ecc6649/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:13 crc kubenswrapper[4751]: I1123 04:54:13.049070 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_06ebe447-bb04-442d-9fdf-752c1dd5a747/glance-log/0.log" Nov 23 04:54:13 crc kubenswrapper[4751]: I1123 04:54:13.049972 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_06ebe447-bb04-442d-9fdf-752c1dd5a747/glance-httpd/0.log" Nov 23 04:54:13 crc kubenswrapper[4751]: I1123 04:54:13.188664 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3c09a9e4-3f1b-4732-9b6b-fcd54fe21650/glance-httpd/0.log" Nov 23 04:54:13 crc kubenswrapper[4751]: I1123 04:54:13.224481 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3c09a9e4-3f1b-4732-9b6b-fcd54fe21650/glance-log/0.log" Nov 23 04:54:13 crc kubenswrapper[4751]: I1123 04:54:13.376539 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-789489d584-slcs8_49f1490c-4b27-47c0-bc36-688b467ebe2c/horizon/0.log" Nov 23 04:54:13 crc kubenswrapper[4751]: I1123 04:54:13.577921 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf_3068980f-3607-43b3-b505-d4663202d8dd/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:13 crc kubenswrapper[4751]: I1123 04:54:13.751575 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dm6wp_42446795-8c4f-4b34-b87c-63fc5306226e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:13 crc kubenswrapper[4751]: I1123 04:54:13.780509 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-789489d584-slcs8_49f1490c-4b27-47c0-bc36-688b467ebe2c/horizon-log/0.log" Nov 23 04:54:14 crc kubenswrapper[4751]: I1123 04:54:14.001019 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-66b57bb577-p2b4n_bf363ce8-cc62-4c00-90f1-adfe3e26e834/keystone-api/0.log" Nov 23 04:54:14 crc kubenswrapper[4751]: I1123 04:54:14.034012 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_edbd1755-69d6-4ae1-809a-a64203c0c090/kube-state-metrics/0.log" Nov 23 04:54:14 crc kubenswrapper[4751]: I1123 04:54:14.188259 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-44fjn_b1b8004d-68f3-41c1-ac68-2b35a527fd88/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:14 crc kubenswrapper[4751]: I1123 04:54:14.529219 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64b84b8669-6xvhn_61f0356e-5917-45f1-86a3-75f15d10ac71/neutron-api/0.log" Nov 23 04:54:14 crc kubenswrapper[4751]: I1123 04:54:14.577065 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64b84b8669-6xvhn_61f0356e-5917-45f1-86a3-75f15d10ac71/neutron-httpd/0.log" Nov 23 04:54:14 crc kubenswrapper[4751]: I1123 04:54:14.646673 4751 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9_e8f0c75e-2965-4ab3-841c-aae06611df5a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:15 crc kubenswrapper[4751]: I1123 04:54:15.065447 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a2b4e65a-c2e7-4040-9230-063782c96cca/nova-api-log/0.log" Nov 23 04:54:15 crc kubenswrapper[4751]: I1123 04:54:15.130264 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1085be93-49b1-4d78-818d-ef37248136f4/nova-cell0-conductor-conductor/0.log" Nov 23 04:54:15 crc kubenswrapper[4751]: I1123 04:54:15.275566 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a2b4e65a-c2e7-4040-9230-063782c96cca/nova-api-api/0.log" Nov 23 04:54:15 crc kubenswrapper[4751]: I1123 04:54:15.356317 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c053622f-478e-4de7-9a6b-43c86c5ada7b/nova-cell1-conductor-conductor/0.log" Nov 23 04:54:15 crc kubenswrapper[4751]: I1123 04:54:15.389221 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c7b90669-75e1-4f89-860f-7dfa61d6fa48/nova-cell1-novncproxy-novncproxy/0.log" Nov 23 04:54:15 crc kubenswrapper[4751]: I1123 04:54:15.613408 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-4nzvn_88fdef25-3ea0-48cf-8c54-22776698b6dc/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:15 crc kubenswrapper[4751]: I1123 04:54:15.672638 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b98d09fb-41ac-4b07-8334-72a33cf11ba6/nova-metadata-log/0.log" Nov 23 04:54:16 crc kubenswrapper[4751]: I1123 04:54:16.010739 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_44c38b4f-095f-46ec-8a95-d7335e696f1b/mysql-bootstrap/0.log" Nov 23 04:54:16 crc kubenswrapper[4751]: I1123 04:54:16.019693 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1d5c19e2-e749-4c94-b8ce-04b9a34b65ff/nova-scheduler-scheduler/0.log" Nov 23 04:54:16 crc kubenswrapper[4751]: I1123 04:54:16.193460 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_44c38b4f-095f-46ec-8a95-d7335e696f1b/galera/0.log" Nov 23 04:54:16 crc kubenswrapper[4751]: I1123 04:54:16.242991 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_44c38b4f-095f-46ec-8a95-d7335e696f1b/mysql-bootstrap/0.log" Nov 23 04:54:16 crc kubenswrapper[4751]: I1123 04:54:16.386019 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f73a5c1f-fac1-4b2d-9611-819ac8ebd57a/mysql-bootstrap/0.log" Nov 23 04:54:16 crc kubenswrapper[4751]: I1123 04:54:16.587121 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f73a5c1f-fac1-4b2d-9611-819ac8ebd57a/galera/0.log" Nov 23 04:54:16 crc kubenswrapper[4751]: I1123 04:54:16.598574 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f73a5c1f-fac1-4b2d-9611-819ac8ebd57a/mysql-bootstrap/0.log" Nov 23 04:54:16 crc kubenswrapper[4751]: I1123 04:54:16.805139 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d8f1f72f-cb69-43a3-8f06-1f348a731330/openstackclient/0.log" Nov 23 04:54:16 crc 
kubenswrapper[4751]: I1123 04:54:16.878457 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-v5k4s_eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1/openstack-network-exporter/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.001371 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b98d09fb-41ac-4b07-8334-72a33cf11ba6/nova-metadata-metadata/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.065883 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26bzb_11513e97-ce99-4112-bf99-386d0074fc15/ovsdb-server-init/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.205666 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26bzb_11513e97-ce99-4112-bf99-386d0074fc15/ovsdb-server-init/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.215238 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26bzb_11513e97-ce99-4112-bf99-386d0074fc15/ovs-vswitchd/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.230334 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26bzb_11513e97-ce99-4112-bf99-386d0074fc15/ovsdb-server/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.394805 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x65b7_4e8bfa9a-1b92-428e-a443-21ccb190a5bd/ovn-controller/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.473483 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5q5kr_dd88992f-1e56-48ed-913c-4ecd0fc20767/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.580601 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_95cf38ba-5edd-4ff7-9213-966b6498df4e/openstack-network-exporter/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.633437 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_95cf38ba-5edd-4ff7-9213-966b6498df4e/ovn-northd/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.922539 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28/openstack-network-exporter/0.log" Nov 23 04:54:17 crc kubenswrapper[4751]: I1123 04:54:17.935919 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28/ovsdbserver-nb/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.069567 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d49c307d-c4e7-412a-9506-71b93c1a1557/openstack-network-exporter/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.195835 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d49c307d-c4e7-412a-9506-71b93c1a1557/ovsdbserver-sb/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.300321 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d9c6b99fd-4x95v_748e93b6-b72d-4fd1-8542-b37b5d4d7031/placement-api/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.422415 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d9c6b99fd-4x95v_748e93b6-b72d-4fd1-8542-b37b5d4d7031/placement-log/0.log" 
Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.429022 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2ebb2468-5894-4d38-ac88-10033af58026/setup-container/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.583793 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2ebb2468-5894-4d38-ac88-10033af58026/setup-container/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.684432 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2ebb2468-5894-4d38-ac88-10033af58026/rabbitmq/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.686916 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcb1b3bf-2ace-42f9-845f-8b993051016b/setup-container/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.865432 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcb1b3bf-2ace-42f9-845f-8b993051016b/setup-container/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.937965 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcb1b3bf-2ace-42f9-845f-8b993051016b/rabbitmq/0.log" Nov 23 04:54:18 crc kubenswrapper[4751]: I1123 04:54:18.974058 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg_be595bce-317a-48c8-949e-2947f0954d0b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:19 crc kubenswrapper[4751]: I1123 04:54:19.207644 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7c89h_ffccd27d-7f9b-49be-9f33-078fc7cdfe25/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:19 crc kubenswrapper[4751]: I1123 04:54:19.207859 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6_381c6054-1b64-48db-81d6-12e6a95dcbe2/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:19 crc kubenswrapper[4751]: I1123 04:54:19.404966 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-p922l_98681550-3696-4f63-a16d-edaf78bf06fb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:19 crc kubenswrapper[4751]: I1123 04:54:19.459953 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9cnpr_53cbfe3d-8559-41aa-8352-5480c56e3624/ssh-known-hosts-edpm-deployment/0.log" Nov 23 04:54:19 crc kubenswrapper[4751]: I1123 04:54:19.620075 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-755d45c5-9j9lj_69aeb1a6-d144-470d-8b47-f4e0126bd9fb/proxy-server/0.log" Nov 23 04:54:19 crc kubenswrapper[4751]: I1123 04:54:19.799825 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-d526x_cc27467d-f028-4378-8e74-84b22dbc0048/swift-ring-rebalance/0.log" Nov 23 04:54:19 crc kubenswrapper[4751]: I1123 04:54:19.800623 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-755d45c5-9j9lj_69aeb1a6-d144-470d-8b47-f4e0126bd9fb/proxy-httpd/0.log" Nov 23 04:54:19 crc kubenswrapper[4751]: I1123 04:54:19.909162 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/account-auditor/0.log" Nov 23 04:54:20 crc 
kubenswrapper[4751]: I1123 04:54:20.047041 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/account-reaper/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.077440 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/account-server/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.085151 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/account-replicator/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.135865 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/container-auditor/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.226558 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/container-server/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.273105 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/container-updater/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.346936 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/container-replicator/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.352716 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-auditor/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.485283 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-replicator/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.486335 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-expirer/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.561803 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-server/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.562638 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-updater/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.669039 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/swift-recon-cron/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.729753 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/rsync/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.861946 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn_8d72beb8-693c-4168-99d2-219a12911413/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:20 crc kubenswrapper[4751]: I1123 04:54:20.983492 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_941e3bda-6f4a-481b-bb73-1c531d70607e/tempest-tests-tempest-tests-runner/0.log" Nov 23 04:54:21 crc kubenswrapper[4751]: I1123 04:54:21.077397 4751 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9222b521-983b-458c-b312-411689b31bec/test-operator-logs-container/0.log" Nov 23 04:54:21 crc kubenswrapper[4751]: I1123 04:54:21.187857 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v_8e5d5738-2df6-456a-b038-9605e0da3b66/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 04:54:30 crc kubenswrapper[4751]: I1123 04:54:30.879617 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3125267d-8f09-4e74-90e2-a8f85e538b86/memcached/0.log" Nov 23 04:54:38 crc kubenswrapper[4751]: I1123 04:54:38.115237 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:54:38 crc kubenswrapper[4751]: I1123 04:54:38.116008 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.135316 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/util/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.270025 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/util/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.286829 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/pull/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.288847 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/pull/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.439389 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/util/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.441329 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/pull/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.460133 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/extract/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.618447 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-5jtfh_18a8d35c-05ad-4057-b88e-e5f0d417678f/kube-rbac-proxy/0.log" Nov 23 04:54:45 crc 
kubenswrapper[4751]: I1123 04:54:45.658758 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-5jtfh_18a8d35c-05ad-4057-b88e-e5f0d417678f/manager/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.686884 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-5hn9n_9762a11c-fd59-489a-9a95-7725f4d1c9e4/kube-rbac-proxy/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.823732 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-5hn9n_9762a11c-fd59-489a-9a95-7725f4d1c9e4/manager/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.847599 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-vrm2h_5323f8b0-18ba-42eb-9a73-ee25c2592aea/manager/0.log" Nov 23 04:54:45 crc kubenswrapper[4751]: I1123 04:54:45.889542 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-vrm2h_5323f8b0-18ba-42eb-9a73-ee25c2592aea/kube-rbac-proxy/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.024780 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-hlrq4_934b92b0-4c8a-48d8-8514-ffe3d566b58d/kube-rbac-proxy/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.134074 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-hlrq4_934b92b0-4c8a-48d8-8514-ffe3d566b58d/manager/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.175265 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-wh2rd_3ce24a56-0cc8-4f63-91da-dde87342529b/manager/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.221217 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-wh2rd_3ce24a56-0cc8-4f63-91da-dde87342529b/kube-rbac-proxy/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.373143 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-js2kh_abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa/kube-rbac-proxy/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.376623 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-js2kh_abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa/manager/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.515463 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-447nt_59fa9d8a-cc64-478a-be71-fda41132aca9/kube-rbac-proxy/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.574320 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-7pqmh_8e942272-3be5-4bec-b764-ab18709fbb4d/kube-rbac-proxy/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.690915 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-447nt_59fa9d8a-cc64-478a-be71-fda41132aca9/manager/0.log" 
Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.830743 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-7pqmh_8e942272-3be5-4bec-b764-ab18709fbb4d/manager/0.log" Nov 23 04:54:46 crc kubenswrapper[4751]: I1123 04:54:46.904985 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-h6524_09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb/kube-rbac-proxy/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.020180 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-h6524_09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb/manager/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.050741 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-5755c_d877cc36-10d3-4ba0-8140-ad4f89a2b855/kube-rbac-proxy/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.111379 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-5755c_d877cc36-10d3-4ba0-8140-ad4f89a2b855/manager/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.234288 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-rr4vk_f65918da-cd67-4138-bcb4-1316d398b30e/manager/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.299124 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-rr4vk_f65918da-cd67-4138-bcb4-1316d398b30e/kube-rbac-proxy/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.450271 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-6dwlq_c1d9b3d4-a044-46e5-be2c-463d728a4c5d/kube-rbac-proxy/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.597437 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-6dwlq_c1d9b3d4-a044-46e5-be2c-463d728a4c5d/manager/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.619866 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-nw29b_3fe1e718-5530-4899-9a28-fbaa27ed08f4/kube-rbac-proxy/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.836302 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-zstqk_7f713385-4fd0-462a-8812-ae2cc7ad910b/kube-rbac-proxy/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.851889 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-nw29b_3fe1e718-5530-4899-9a28-fbaa27ed08f4/manager/0.log" Nov 23 04:54:47 crc kubenswrapper[4751]: I1123 04:54:47.979245 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-zstqk_7f713385-4fd0-462a-8812-ae2cc7ad910b/manager/0.log" Nov 23 04:54:48 crc kubenswrapper[4751]: I1123 04:54:48.057725 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq_5176792c-6b3a-46cc-9ddb-5416391ce264/kube-rbac-proxy/0.log" Nov 23 04:54:48 crc kubenswrapper[4751]: I1123 04:54:48.093378 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq_5176792c-6b3a-46cc-9ddb-5416391ce264/manager/0.log" Nov 23 04:54:48 crc kubenswrapper[4751]: I1123 04:54:48.247088 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5849b9999b-qsxxk_96b8f5ab-6091-4283-b8ff-76a80465d4a0/kube-rbac-proxy/0.log" Nov 23 04:54:48 crc kubenswrapper[4751]: I1123 04:54:48.381779 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-549d6967c7-krhsr_292f5bac-dc69-4084-814f-509540c16426/kube-rbac-proxy/0.log" Nov 23 04:54:48 crc kubenswrapper[4751]: I1123 04:54:48.566321 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-549d6967c7-krhsr_292f5bac-dc69-4084-814f-509540c16426/operator/0.log" Nov 23 04:54:48 crc kubenswrapper[4751]: I1123 04:54:48.705474 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zm8mr_906bbca3-2aaf-47a8-ba3e-ee004ca911d6/registry-server/0.log" Nov 23 04:54:48 crc kubenswrapper[4751]: I1123 04:54:48.815896 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-scc77_d96aa695-fa54-4828-b37d-9c4e5121344a/kube-rbac-proxy/0.log" Nov 23 04:54:48 crc kubenswrapper[4751]: I1123 04:54:48.972496 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-scc77_d96aa695-fa54-4828-b37d-9c4e5121344a/manager/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.026968 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-58plx_4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d/kube-rbac-proxy/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.084138 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-58plx_4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d/manager/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.194380 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd_0d5ce886-fa22-4fc1-a369-0311b8a22353/operator/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.284668 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-q87hr_b49833ae-797b-42aa-aa69-4ddc939dcad6/kube-rbac-proxy/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.303149 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5849b9999b-qsxxk_96b8f5ab-6091-4283-b8ff-76a80465d4a0/manager/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.415083 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-q87hr_b49833ae-797b-42aa-aa69-4ddc939dcad6/manager/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.439292 4751 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-8ldr4_763c6ab4-8128-46d4-9c53-45c1b2cd7ecc/kube-rbac-proxy/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.527773 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-8ldr4_763c6ab4-8128-46d4-9c53-45c1b2cd7ecc/manager/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.570117 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-7tkw8_79f85f0e-fcac-4778-8dac-0a2953ba9c8d/kube-rbac-proxy/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.622638 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-7tkw8_79f85f0e-fcac-4778-8dac-0a2953ba9c8d/manager/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.678382 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-p76zg_fd3d207d-1aae-49de-984e-ca3ebf42f864/manager/0.log" Nov 23 04:54:49 crc kubenswrapper[4751]: I1123 04:54:49.706673 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-p76zg_fd3d207d-1aae-49de-984e-ca3ebf42f864/kube-rbac-proxy/0.log" Nov 23 04:55:06 crc kubenswrapper[4751]: I1123 04:55:06.542961 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cbnj9_fa7612e7-e0b7-4b66-a948-fc5bc3aa3033/control-plane-machine-set-operator/0.log" Nov 23 04:55:06 crc kubenswrapper[4751]: I1123 04:55:06.698248 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8kg8p_b65a94d8-c328-457e-ac66-f6d62f592d55/kube-rbac-proxy/0.log" Nov 23 04:55:06 crc kubenswrapper[4751]: I1123 04:55:06.716864 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8kg8p_b65a94d8-c328-457e-ac66-f6d62f592d55/machine-api-operator/0.log" Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.115384 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.115776 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.115830 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.116885 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.116994 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" gracePeriod=600 Nov 23 04:55:08 crc kubenswrapper[4751]: E1123 04:55:08.275612 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.777173 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" exitCode=0 Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.777221 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9"} Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.777266 4751 scope.go:117] "RemoveContainer" containerID="ed1541714abc2376ae800a85b9d54abcd651ea34aafcda60af88de4a01b521c1" Nov 23 04:55:08 crc kubenswrapper[4751]: I1123 04:55:08.778374 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:55:08 crc kubenswrapper[4751]: E1123 04:55:08.778853 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:55:20 crc kubenswrapper[4751]: I1123 04:55:20.005009 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-52xvz_fcc30abb-9ab6-4b0f-b27c-8772f6026dd7/cert-manager-controller/0.log" Nov 23 04:55:20 crc kubenswrapper[4751]: I1123 04:55:20.239705 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-n7m2q_ac86f9c3-7a1c-430f-abdc-3002de03a7df/cert-manager-cainjector/0.log" Nov 23 04:55:20 crc kubenswrapper[4751]: I1123 04:55:20.302477 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-rcqpp_6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3/cert-manager-webhook/0.log" Nov 23 04:55:23 crc kubenswrapper[4751]: I1123 04:55:23.644521 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:55:23 crc kubenswrapper[4751]: E1123 04:55:23.645254 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:55:33 crc kubenswrapper[4751]: I1123 04:55:33.755777 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-flnqj_6b49364f-9a9b-4be9-b128-1a1b708073cc/nmstate-console-plugin/0.log" Nov 23 04:55:33 crc kubenswrapper[4751]: I1123 04:55:33.941921 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q9z6z_ac9fa491-4c47-4862-bb2f-96dd556da176/nmstate-handler/0.log" Nov 23 04:55:33 crc kubenswrapper[4751]: I1123 04:55:33.957719 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-k5jpj_b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5/kube-rbac-proxy/0.log" Nov 23 04:55:34 crc kubenswrapper[4751]: I1123 04:55:34.015429 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-k5jpj_b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5/nmstate-metrics/0.log" Nov 23 04:55:34 crc kubenswrapper[4751]: I1123 04:55:34.151193 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-dhvb2_9dff0d36-e2c4-4a48-a395-4ef9cae05540/nmstate-operator/0.log" Nov 23 04:55:34 crc kubenswrapper[4751]: I1123 04:55:34.253760 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-8tdcc_67521947-5803-47c5-95ee-ff1331b80d30/nmstate-webhook/0.log" Nov 23 04:55:36 crc kubenswrapper[4751]: I1123 04:55:36.644426 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:55:36 crc kubenswrapper[4751]: E1123 04:55:36.644886 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:55:49 crc kubenswrapper[4751]: I1123 04:55:49.517255 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5kht5_0a9bcd23-2927-40fc-be78-28a85fd0c43c/kube-rbac-proxy/0.log" Nov 23 04:55:49 crc kubenswrapper[4751]: I1123 04:55:49.636225 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5kht5_0a9bcd23-2927-40fc-be78-28a85fd0c43c/controller/0.log" Nov 23 04:55:49 crc kubenswrapper[4751]: I1123 04:55:49.699283 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-frr-files/0.log" Nov 23 04:55:49 crc kubenswrapper[4751]: I1123 04:55:49.875150 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-frr-files/0.log" Nov 23 04:55:49 crc kubenswrapper[4751]: I1123 04:55:49.881745 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-reloader/0.log" Nov 23 04:55:49 crc kubenswrapper[4751]: I1123 04:55:49.923156 4751 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-metrics/0.log" Nov 23 04:55:49 crc kubenswrapper[4751]: I1123 04:55:49.941619 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-reloader/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.060507 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-reloader/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.094750 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-frr-files/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.137697 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-metrics/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.188925 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-metrics/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.277737 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-frr-files/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.296303 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-reloader/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.332613 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-metrics/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.372023 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/controller/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.447259 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/frr-metrics/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.537481 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/kube-rbac-proxy-frr/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.548701 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/kube-rbac-proxy/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.691605 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/reloader/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.757580 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-64sx8_52ebaa08-f93a-422b-8c95-728f7ad4a20c/frr-k8s-webhook-server/0.log" Nov 23 04:55:50 crc kubenswrapper[4751]: I1123 04:55:50.945972 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-859f4d786d-lx7n9_ed440de8-4a60-48c8-85e5-a0431415aa1e/manager/0.log" Nov 23 04:55:51 crc kubenswrapper[4751]: I1123 04:55:51.120625 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7577964656-7fb5v_5f06ddd2-0977-4bb4-954a-8bff2da8d49a/webhook-server/0.log" Nov 23 04:55:51 crc kubenswrapper[4751]: I1123 04:55:51.149653 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ps8wm_24d322b0-264c-482c-9daa-9ee340079d1f/kube-rbac-proxy/0.log" Nov 23 04:55:51 crc kubenswrapper[4751]: I1123 04:55:51.643605 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:55:51 crc kubenswrapper[4751]: E1123 04:55:51.644284 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:55:51 crc kubenswrapper[4751]: I1123 04:55:51.754305 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ps8wm_24d322b0-264c-482c-9daa-9ee340079d1f/speaker/0.log" Nov 23 04:55:51 crc kubenswrapper[4751]: I1123 04:55:51.898504 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/frr/0.log" Nov 23 04:56:02 crc kubenswrapper[4751]: I1123 04:56:02.644300 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:56:02 crc kubenswrapper[4751]: E1123 04:56:02.645282 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:56:03 crc kubenswrapper[4751]: I1123 04:56:03.689410 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/util/0.log" Nov 23 04:56:03 crc kubenswrapper[4751]: I1123 04:56:03.921188 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/pull/0.log" Nov 23 04:56:03 crc kubenswrapper[4751]: I1123 04:56:03.940272 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/util/0.log" Nov 23 04:56:03 crc kubenswrapper[4751]: I1123 04:56:03.946429 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/pull/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.128850 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/util/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.178658 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/extract/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.179092 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/pull/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.322236 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-utilities/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.468448 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-content/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.469114 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-content/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.532148 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-utilities/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.629012 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-utilities/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.687019 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-content/0.log" Nov 23 04:56:04 crc kubenswrapper[4751]: I1123 04:56:04.834606 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-utilities/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.077943 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-content/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.103336 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-content/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.144417 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-utilities/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.158452 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/registry-server/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.271107 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-content/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.277416 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-utilities/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.497792 4751 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/util/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.733549 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/pull/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.738795 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/registry-server/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.750739 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/pull/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.756465 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/util/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.882043 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/util/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.920407 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/pull/0.log" Nov 23 04:56:05 crc kubenswrapper[4751]: I1123 04:56:05.946723 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/extract/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.085517 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b4zhm_77b14a1a-54d8-4706-95b6-2b94d8dffa43/marketplace-operator/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.152729 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-utilities/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.290794 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-content/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.304685 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-utilities/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.351934 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-content/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.490842 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-content/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.492678 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-utilities/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.669628 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/registry-server/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.710303 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-utilities/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.852251 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-utilities/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.867811 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-content/0.log" Nov 23 04:56:06 crc kubenswrapper[4751]: I1123 04:56:06.882009 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-content/0.log" Nov 23 04:56:07 crc kubenswrapper[4751]: I1123 04:56:07.031405 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-utilities/0.log" Nov 23 04:56:07 crc kubenswrapper[4751]: I1123 04:56:07.031620 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-content/0.log" Nov 23 04:56:07 crc kubenswrapper[4751]: I1123 04:56:07.491365 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/registry-server/0.log" Nov 23 04:56:14 crc kubenswrapper[4751]: I1123 04:56:14.650920 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:56:14 crc kubenswrapper[4751]: E1123 04:56:14.652295 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:56:27 crc kubenswrapper[4751]: I1123 04:56:27.644399 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:56:27 crc kubenswrapper[4751]: E1123 04:56:27.645198 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:56:41 crc kubenswrapper[4751]: I1123 04:56:41.644104 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:56:41 crc 
kubenswrapper[4751]: E1123 04:56:41.644872 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:56:55 crc kubenswrapper[4751]: I1123 04:56:55.644781 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:56:55 crc kubenswrapper[4751]: E1123 04:56:55.645723 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:57:08 crc kubenswrapper[4751]: I1123 04:57:08.649405 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:57:08 crc kubenswrapper[4751]: E1123 04:57:08.650217 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:57:19 crc kubenswrapper[4751]: I1123 04:57:19.644753 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:57:19 crc kubenswrapper[4751]: E1123 04:57:19.645711 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:57:31 crc kubenswrapper[4751]: I1123 04:57:31.644468 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:57:31 crc kubenswrapper[4751]: E1123 04:57:31.645217 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:57:44 crc kubenswrapper[4751]: I1123 04:57:44.452118 4751 generic.go:334] "Generic (PLEG): container finished" podID="6631e16e-9bbd-4092-a130-5ba01098f7be" containerID="359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6" exitCode=0 Nov 23 04:57:44 crc kubenswrapper[4751]: I1123 04:57:44.452211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-pgdbm/must-gather-4gcp9" event={"ID":"6631e16e-9bbd-4092-a130-5ba01098f7be","Type":"ContainerDied","Data":"359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6"} Nov 23 04:57:44 crc kubenswrapper[4751]: I1123 04:57:44.453537 4751 scope.go:117] "RemoveContainer" containerID="359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6" Nov 23 04:57:44 crc kubenswrapper[4751]: I1123 04:57:44.729327 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pgdbm_must-gather-4gcp9_6631e16e-9bbd-4092-a130-5ba01098f7be/gather/0.log" Nov 23 04:57:45 crc kubenswrapper[4751]: I1123 04:57:45.644019 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:57:45 crc kubenswrapper[4751]: E1123 04:57:45.644348 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:57:53 crc kubenswrapper[4751]: I1123 04:57:53.790574 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pgdbm/must-gather-4gcp9"] Nov 23 04:57:53 crc kubenswrapper[4751]: I1123 04:57:53.791518 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" podUID="6631e16e-9bbd-4092-a130-5ba01098f7be" containerName="copy" containerID="cri-o://ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb" gracePeriod=2 Nov 23 04:57:53 crc kubenswrapper[4751]: I1123 04:57:53.802010 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pgdbm/must-gather-4gcp9"] Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.248444 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pgdbm_must-gather-4gcp9_6631e16e-9bbd-4092-a130-5ba01098f7be/copy/0.log" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.250029 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.369522 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6631e16e-9bbd-4092-a130-5ba01098f7be-must-gather-output\") pod \"6631e16e-9bbd-4092-a130-5ba01098f7be\" (UID: \"6631e16e-9bbd-4092-a130-5ba01098f7be\") " Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.369573 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2s4f\" (UniqueName: \"kubernetes.io/projected/6631e16e-9bbd-4092-a130-5ba01098f7be-kube-api-access-q2s4f\") pod \"6631e16e-9bbd-4092-a130-5ba01098f7be\" (UID: \"6631e16e-9bbd-4092-a130-5ba01098f7be\") " Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.375664 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6631e16e-9bbd-4092-a130-5ba01098f7be-kube-api-access-q2s4f" (OuterVolumeSpecName: "kube-api-access-q2s4f") pod "6631e16e-9bbd-4092-a130-5ba01098f7be" (UID: "6631e16e-9bbd-4092-a130-5ba01098f7be"). InnerVolumeSpecName "kube-api-access-q2s4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.471672 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2s4f\" (UniqueName: \"kubernetes.io/projected/6631e16e-9bbd-4092-a130-5ba01098f7be-kube-api-access-q2s4f\") on node \"crc\" DevicePath \"\"" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.515284 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6631e16e-9bbd-4092-a130-5ba01098f7be-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6631e16e-9bbd-4092-a130-5ba01098f7be" (UID: "6631e16e-9bbd-4092-a130-5ba01098f7be"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.566886 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pgdbm_must-gather-4gcp9_6631e16e-9bbd-4092-a130-5ba01098f7be/copy/0.log" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.567442 4751 generic.go:334] "Generic (PLEG): container finished" podID="6631e16e-9bbd-4092-a130-5ba01098f7be" containerID="ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb" exitCode=143 Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.567510 4751 scope.go:117] "RemoveContainer" containerID="ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.567598 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pgdbm/must-gather-4gcp9" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.573366 4751 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6631e16e-9bbd-4092-a130-5ba01098f7be-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.588731 4751 scope.go:117] "RemoveContainer" containerID="359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.620343 4751 scope.go:117] "RemoveContainer" containerID="ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb" Nov 23 04:57:54 crc kubenswrapper[4751]: E1123 04:57:54.620804 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb\": container with ID starting with ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb not found: ID does not exist" containerID="ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.620847 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb"} err="failed to get container status \"ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb\": rpc error: code = NotFound desc = could not find container \"ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb\": container with ID starting with ee17e9b990f39095ec5777b28c148cfc10bdc8417032e63e44aad4c3b2d1b5fb not found: ID does not exist" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.620883 4751 scope.go:117] "RemoveContainer" containerID="359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6" Nov 23 04:57:54 crc 
kubenswrapper[4751]: E1123 04:57:54.621233 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6\": container with ID starting with 359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6 not found: ID does not exist" containerID="359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.621269 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6"} err="failed to get container status \"359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6\": rpc error: code = NotFound desc = could not find container \"359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6\": container with ID starting with 359c3ecbabab23f6d7465560dbe7336986fb8b3149df379adf055fbb932182c6 not found: ID does not exist" Nov 23 04:57:54 crc kubenswrapper[4751]: I1123 04:57:54.653509 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6631e16e-9bbd-4092-a130-5ba01098f7be" path="/var/lib/kubelet/pods/6631e16e-9bbd-4092-a130-5ba01098f7be/volumes" Nov 23 04:57:59 crc kubenswrapper[4751]: I1123 04:57:59.644865 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:57:59 crc kubenswrapper[4751]: E1123 04:57:59.645902 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:58:11 crc kubenswrapper[4751]: I1123 04:58:11.644650 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:58:11 crc kubenswrapper[4751]: E1123 04:58:11.646035 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.550647 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ktzx"] Nov 23 04:58:12 crc kubenswrapper[4751]: E1123 04:58:12.551251 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b03e8a5-b7dd-403e-854f-2eb569c18f3c" containerName="container-00" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.551266 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b03e8a5-b7dd-403e-854f-2eb569c18f3c" containerName="container-00" Nov 23 04:58:12 crc kubenswrapper[4751]: E1123 04:58:12.551299 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6631e16e-9bbd-4092-a130-5ba01098f7be" containerName="copy" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.551305 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6631e16e-9bbd-4092-a130-5ba01098f7be" 
containerName="copy" Nov 23 04:58:12 crc kubenswrapper[4751]: E1123 04:58:12.551316 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6631e16e-9bbd-4092-a130-5ba01098f7be" containerName="gather" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.551324 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6631e16e-9bbd-4092-a130-5ba01098f7be" containerName="gather" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.551527 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6631e16e-9bbd-4092-a130-5ba01098f7be" containerName="copy" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.551542 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6631e16e-9bbd-4092-a130-5ba01098f7be" containerName="gather" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.551551 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b03e8a5-b7dd-403e-854f-2eb569c18f3c" containerName="container-00" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.552725 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.578963 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ktzx"] Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.653086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-catalog-content\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.653169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72fq6\" (UniqueName: \"kubernetes.io/projected/80134206-7f1a-4b1b-9435-d9854ab45935-kube-api-access-72fq6\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.653270 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-utilities\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.755201 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-utilities\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.755330 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-catalog-content\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.755400 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72fq6\" 
(UniqueName: \"kubernetes.io/projected/80134206-7f1a-4b1b-9435-d9854ab45935-kube-api-access-72fq6\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.755966 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-catalog-content\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.756223 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-utilities\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.783807 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72fq6\" (UniqueName: \"kubernetes.io/projected/80134206-7f1a-4b1b-9435-d9854ab45935-kube-api-access-72fq6\") pod \"community-operators-9ktzx\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:12 crc kubenswrapper[4751]: I1123 04:58:12.883395 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:13 crc kubenswrapper[4751]: I1123 04:58:13.403790 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ktzx"] Nov 23 04:58:13 crc kubenswrapper[4751]: W1123 04:58:13.405776 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80134206_7f1a_4b1b_9435_d9854ab45935.slice/crio-b42485f58d23b4eb12559247d7701980239517a53a5e1ff0198594646ae6f587 WatchSource:0}: Error finding container b42485f58d23b4eb12559247d7701980239517a53a5e1ff0198594646ae6f587: Status 404 returned error can't find the container with id b42485f58d23b4eb12559247d7701980239517a53a5e1ff0198594646ae6f587 Nov 23 04:58:13 crc kubenswrapper[4751]: I1123 04:58:13.789135 4751 generic.go:334] "Generic (PLEG): container finished" podID="80134206-7f1a-4b1b-9435-d9854ab45935" containerID="28e9368aa5b345388bb2897aa675f22b9b8fdb1d203c0b9ab659f0315e55673b" exitCode=0 Nov 23 04:58:13 crc kubenswrapper[4751]: I1123 04:58:13.789230 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktzx" event={"ID":"80134206-7f1a-4b1b-9435-d9854ab45935","Type":"ContainerDied","Data":"28e9368aa5b345388bb2897aa675f22b9b8fdb1d203c0b9ab659f0315e55673b"} Nov 23 04:58:13 crc kubenswrapper[4751]: I1123 04:58:13.789525 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktzx" event={"ID":"80134206-7f1a-4b1b-9435-d9854ab45935","Type":"ContainerStarted","Data":"b42485f58d23b4eb12559247d7701980239517a53a5e1ff0198594646ae6f587"} Nov 23 04:58:13 crc kubenswrapper[4751]: I1123 04:58:13.791752 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 04:58:14 crc kubenswrapper[4751]: I1123 04:58:14.812615 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktzx" 
event={"ID":"80134206-7f1a-4b1b-9435-d9854ab45935","Type":"ContainerStarted","Data":"c645bd3bec89ab2e7502c407479891d08284c44886e59fcb671534133e981888"} Nov 23 04:58:15 crc kubenswrapper[4751]: I1123 04:58:15.833437 4751 generic.go:334] "Generic (PLEG): container finished" podID="80134206-7f1a-4b1b-9435-d9854ab45935" containerID="c645bd3bec89ab2e7502c407479891d08284c44886e59fcb671534133e981888" exitCode=0 Nov 23 04:58:15 crc kubenswrapper[4751]: I1123 04:58:15.833556 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktzx" event={"ID":"80134206-7f1a-4b1b-9435-d9854ab45935","Type":"ContainerDied","Data":"c645bd3bec89ab2e7502c407479891d08284c44886e59fcb671534133e981888"} Nov 23 04:58:16 crc kubenswrapper[4751]: I1123 04:58:16.846789 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktzx" event={"ID":"80134206-7f1a-4b1b-9435-d9854ab45935","Type":"ContainerStarted","Data":"485080f52788cd9dcf10a5d7c50d3e638019316e0605e8abfd9e7b5c70f91315"} Nov 23 04:58:16 crc kubenswrapper[4751]: I1123 04:58:16.885516 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ktzx" podStartSLOduration=2.424719131 podStartE2EDuration="4.885483023s" podCreationTimestamp="2025-11-23 04:58:12 +0000 UTC" firstStartedPulling="2025-11-23 04:58:13.791174563 +0000 UTC m=+3789.984845962" lastFinishedPulling="2025-11-23 04:58:16.251938455 +0000 UTC m=+3792.445609854" observedRunningTime="2025-11-23 04:58:16.870215737 +0000 UTC m=+3793.063887106" watchObservedRunningTime="2025-11-23 04:58:16.885483023 +0000 UTC m=+3793.079154422" Nov 23 04:58:22 crc kubenswrapper[4751]: I1123 04:58:22.883900 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:22 crc kubenswrapper[4751]: I1123 04:58:22.884606 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:22 crc kubenswrapper[4751]: I1123 04:58:22.962725 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:23 crc kubenswrapper[4751]: I1123 04:58:23.043705 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:23 crc kubenswrapper[4751]: I1123 04:58:23.215258 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ktzx"] Nov 23 04:58:24 crc kubenswrapper[4751]: I1123 04:58:24.953486 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9ktzx" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" containerName="registry-server" containerID="cri-o://485080f52788cd9dcf10a5d7c50d3e638019316e0605e8abfd9e7b5c70f91315" gracePeriod=2 Nov 23 04:58:25 crc kubenswrapper[4751]: I1123 04:58:25.645141 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:58:25 crc kubenswrapper[4751]: E1123 04:58:25.645772 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:58:25 crc kubenswrapper[4751]: I1123 04:58:25.969710 4751 generic.go:334] "Generic (PLEG): container finished" podID="80134206-7f1a-4b1b-9435-d9854ab45935" containerID="485080f52788cd9dcf10a5d7c50d3e638019316e0605e8abfd9e7b5c70f91315" exitCode=0 Nov 23 04:58:25 crc kubenswrapper[4751]: I1123 04:58:25.969769 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktzx" event={"ID":"80134206-7f1a-4b1b-9435-d9854ab45935","Type":"ContainerDied","Data":"485080f52788cd9dcf10a5d7c50d3e638019316e0605e8abfd9e7b5c70f91315"} Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.741269 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.855886 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72fq6\" (UniqueName: \"kubernetes.io/projected/80134206-7f1a-4b1b-9435-d9854ab45935-kube-api-access-72fq6\") pod \"80134206-7f1a-4b1b-9435-d9854ab45935\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.856417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-catalog-content\") pod \"80134206-7f1a-4b1b-9435-d9854ab45935\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.856467 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-utilities\") pod \"80134206-7f1a-4b1b-9435-d9854ab45935\" (UID: \"80134206-7f1a-4b1b-9435-d9854ab45935\") " Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.858599 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-utilities" (OuterVolumeSpecName: "utilities") pod "80134206-7f1a-4b1b-9435-d9854ab45935" (UID: "80134206-7f1a-4b1b-9435-d9854ab45935"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.864056 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80134206-7f1a-4b1b-9435-d9854ab45935-kube-api-access-72fq6" (OuterVolumeSpecName: "kube-api-access-72fq6") pod "80134206-7f1a-4b1b-9435-d9854ab45935" (UID: "80134206-7f1a-4b1b-9435-d9854ab45935"). InnerVolumeSpecName "kube-api-access-72fq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.949260 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80134206-7f1a-4b1b-9435-d9854ab45935" (UID: "80134206-7f1a-4b1b-9435-d9854ab45935"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.959811 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72fq6\" (UniqueName: \"kubernetes.io/projected/80134206-7f1a-4b1b-9435-d9854ab45935-kube-api-access-72fq6\") on node \"crc\" DevicePath \"\"" Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.959856 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.959877 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80134206-7f1a-4b1b-9435-d9854ab45935-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.985803 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ktzx" event={"ID":"80134206-7f1a-4b1b-9435-d9854ab45935","Type":"ContainerDied","Data":"b42485f58d23b4eb12559247d7701980239517a53a5e1ff0198594646ae6f587"} Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.985899 4751 scope.go:117] "RemoveContainer" containerID="485080f52788cd9dcf10a5d7c50d3e638019316e0605e8abfd9e7b5c70f91315" Nov 23 04:58:26 crc kubenswrapper[4751]: I1123 04:58:26.985932 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ktzx" Nov 23 04:58:27 crc kubenswrapper[4751]: I1123 04:58:27.017405 4751 scope.go:117] "RemoveContainer" containerID="c645bd3bec89ab2e7502c407479891d08284c44886e59fcb671534133e981888" Nov 23 04:58:27 crc kubenswrapper[4751]: I1123 04:58:27.037846 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ktzx"] Nov 23 04:58:27 crc kubenswrapper[4751]: I1123 04:58:27.047300 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9ktzx"] Nov 23 04:58:27 crc kubenswrapper[4751]: I1123 04:58:27.052824 4751 scope.go:117] "RemoveContainer" containerID="28e9368aa5b345388bb2897aa675f22b9b8fdb1d203c0b9ab659f0315e55673b" Nov 23 04:58:28 crc kubenswrapper[4751]: I1123 04:58:28.656546 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" path="/var/lib/kubelet/pods/80134206-7f1a-4b1b-9435-d9854ab45935/volumes" Nov 23 04:58:38 crc kubenswrapper[4751]: I1123 04:58:38.645384 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:58:38 crc kubenswrapper[4751]: E1123 04:58:38.646838 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:58:50 crc kubenswrapper[4751]: I1123 04:58:50.644975 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:58:50 crc kubenswrapper[4751]: E1123 04:58:50.645988 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:59:01 crc kubenswrapper[4751]: I1123 04:59:01.644506 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:59:01 crc kubenswrapper[4751]: E1123 04:59:01.645621 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:59:15 crc kubenswrapper[4751]: I1123 04:59:15.644928 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:59:15 crc kubenswrapper[4751]: E1123 04:59:15.648379 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:59:30 crc kubenswrapper[4751]: I1123 04:59:30.644729 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:59:30 crc kubenswrapper[4751]: E1123 04:59:30.645952 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:59:45 crc kubenswrapper[4751]: I1123 04:59:45.645016 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:59:45 crc kubenswrapper[4751]: E1123 04:59:45.646563 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 04:59:56 crc kubenswrapper[4751]: I1123 04:59:56.644603 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 04:59:56 crc kubenswrapper[4751]: E1123 04:59:56.645362 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.152099 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z"] Nov 23 05:00:00 crc kubenswrapper[4751]: E1123 05:00:00.152898 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" containerName="extract-content" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.152932 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" containerName="extract-content" Nov 23 05:00:00 crc kubenswrapper[4751]: E1123 05:00:00.152960 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" containerName="extract-utilities" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.152969 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" containerName="extract-utilities" Nov 23 05:00:00 crc kubenswrapper[4751]: E1123 05:00:00.152983 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" containerName="registry-server" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.152991 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" containerName="registry-server" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.153290 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="80134206-7f1a-4b1b-9435-d9854ab45935" containerName="registry-server"
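collect-profiles-29397900-zsp5z, added at exactly 05:00:00, is a Job pod from the OLM collect-profiles CronJob, and the numeric suffix is not random: the CronJob controller names each spawned Job after its scheduled run time expressed in minutes since the Unix epoch. The keystone-cron-29397901-khbbr pod that appears at 05:01:00 further down carries the next minute's stamp. A quick decode, with decode() being an illustrative helper rather than any real API:

    # Sketch: decode the CronJob job-name suffix seen above. The CronJob
    # controller names spawned Jobs <cronjob>-<scheduled time in minutes
    # since the Unix epoch>, so the suffix pins the intended wall-clock run.
    from datetime import datetime, timezone

    def decode(suffix: int) -> datetime:
        return datetime.fromtimestamp(suffix * 60, tz=timezone.utc)

    print(decode(29397900))  # 2025-11-23 05:00:00+00:00
    print(decode(29397901))  # 2025-11-23 05:01:00+00:00 (keystone-cron below)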
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.155963 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.157444 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.162430 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z"] Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.261188 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h8nq\" (UniqueName: \"kubernetes.io/projected/72ef3739-293c-4912-923e-4ac328274d99-kube-api-access-9h8nq\") pod \"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.261679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72ef3739-293c-4912-923e-4ac328274d99-secret-volume\") pod \"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.261944 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72ef3739-293c-4912-923e-4ac328274d99-config-volume\") pod \"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.364408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72ef3739-293c-4912-923e-4ac328274d99-secret-volume\") pod \"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.364588 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72ef3739-293c-4912-923e-4ac328274d99-config-volume\") pod \"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.364751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h8nq\" (UniqueName: \"kubernetes.io/projected/72ef3739-293c-4912-923e-4ac328274d99-kube-api-access-9h8nq\") pod \"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.366142 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72ef3739-293c-4912-923e-4ac328274d99-config-volume\") pod 
\"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.374174 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72ef3739-293c-4912-923e-4ac328274d99-secret-volume\") pod \"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.388383 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h8nq\" (UniqueName: \"kubernetes.io/projected/72ef3739-293c-4912-923e-4ac328274d99-kube-api-access-9h8nq\") pod \"collect-profiles-29397900-zsp5z\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.481039 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:00 crc kubenswrapper[4751]: I1123 05:00:00.985207 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z"] Nov 23 05:00:00 crc kubenswrapper[4751]: W1123 05:00:00.991066 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72ef3739_293c_4912_923e_4ac328274d99.slice/crio-31004ed28a2b03c41ed33ded4b637a016b9d21d139fdf1e977e33985832355e0 WatchSource:0}: Error finding container 31004ed28a2b03c41ed33ded4b637a016b9d21d139fdf1e977e33985832355e0: Status 404 returned error can't find the container with id 31004ed28a2b03c41ed33ded4b637a016b9d21d139fdf1e977e33985832355e0 Nov 23 05:00:01 crc kubenswrapper[4751]: I1123 05:00:01.009935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" event={"ID":"72ef3739-293c-4912-923e-4ac328274d99","Type":"ContainerStarted","Data":"31004ed28a2b03c41ed33ded4b637a016b9d21d139fdf1e977e33985832355e0"} Nov 23 05:00:02 crc kubenswrapper[4751]: I1123 05:00:02.022717 4751 generic.go:334] "Generic (PLEG): container finished" podID="72ef3739-293c-4912-923e-4ac328274d99" containerID="f8d6b1eb253e0e5ec9f87c85b80af8ef548612e07ee6aab8dec30097b74c48ce" exitCode=0 Nov 23 05:00:02 crc kubenswrapper[4751]: I1123 05:00:02.022840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" event={"ID":"72ef3739-293c-4912-923e-4ac328274d99","Type":"ContainerDied","Data":"f8d6b1eb253e0e5ec9f87c85b80af8ef548612e07ee6aab8dec30097b74c48ce"} Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.432814 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.522577 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72ef3739-293c-4912-923e-4ac328274d99-secret-volume\") pod \"72ef3739-293c-4912-923e-4ac328274d99\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.522663 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h8nq\" (UniqueName: \"kubernetes.io/projected/72ef3739-293c-4912-923e-4ac328274d99-kube-api-access-9h8nq\") pod \"72ef3739-293c-4912-923e-4ac328274d99\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.522724 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72ef3739-293c-4912-923e-4ac328274d99-config-volume\") pod \"72ef3739-293c-4912-923e-4ac328274d99\" (UID: \"72ef3739-293c-4912-923e-4ac328274d99\") " Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.523912 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ef3739-293c-4912-923e-4ac328274d99-config-volume" (OuterVolumeSpecName: "config-volume") pod "72ef3739-293c-4912-923e-4ac328274d99" (UID: "72ef3739-293c-4912-923e-4ac328274d99"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.530248 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ef3739-293c-4912-923e-4ac328274d99-kube-api-access-9h8nq" (OuterVolumeSpecName: "kube-api-access-9h8nq") pod "72ef3739-293c-4912-923e-4ac328274d99" (UID: "72ef3739-293c-4912-923e-4ac328274d99"). InnerVolumeSpecName "kube-api-access-9h8nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.534139 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ef3739-293c-4912-923e-4ac328274d99-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "72ef3739-293c-4912-923e-4ac328274d99" (UID: "72ef3739-293c-4912-923e-4ac328274d99"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.624470 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h8nq\" (UniqueName: \"kubernetes.io/projected/72ef3739-293c-4912-923e-4ac328274d99-kube-api-access-9h8nq\") on node \"crc\" DevicePath \"\"" Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.624749 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72ef3739-293c-4912-923e-4ac328274d99-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 05:00:03 crc kubenswrapper[4751]: I1123 05:00:03.624841 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/72ef3739-293c-4912-923e-4ac328274d99-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 05:00:04 crc kubenswrapper[4751]: I1123 05:00:04.064415 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" event={"ID":"72ef3739-293c-4912-923e-4ac328274d99","Type":"ContainerDied","Data":"31004ed28a2b03c41ed33ded4b637a016b9d21d139fdf1e977e33985832355e0"} Nov 23 05:00:04 crc kubenswrapper[4751]: I1123 05:00:04.064513 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31004ed28a2b03c41ed33ded4b637a016b9d21d139fdf1e977e33985832355e0" Nov 23 05:00:04 crc kubenswrapper[4751]: I1123 05:00:04.064463 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29397900-zsp5z" Nov 23 05:00:04 crc kubenswrapper[4751]: I1123 05:00:04.509507 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"] Nov 23 05:00:04 crc kubenswrapper[4751]: I1123 05:00:04.518334 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29397855-mg6jt"] Nov 23 05:00:04 crc kubenswrapper[4751]: I1123 05:00:04.661830 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c2aada-719a-44a4-b3d4-1db9b3ba2f5e" path="/var/lib/kubelet/pods/14c2aada-719a-44a4-b3d4-1db9b3ba2f5e/volumes" Nov 23 05:00:08 crc kubenswrapper[4751]: I1123 05:00:08.644187 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9" Nov 23 05:00:09 crc kubenswrapper[4751]: I1123 05:00:09.140698 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"ef64413a8e00d40f728280303205d22ac7788b8802515777d2fa010c1e21ef60"} Nov 23 05:00:22 crc kubenswrapper[4751]: I1123 05:00:22.428330 4751 scope.go:117] "RemoveContainer" containerID="9bd3c43aa80b22bc934374be7d27fce1a151512459c50aec3fd60ebee1096df9" Nov 23 05:00:31 crc kubenswrapper[4751]: I1123 05:00:31.791336 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fcf8q/must-gather-ffw5v"] Nov 23 05:00:31 crc kubenswrapper[4751]: E1123 05:00:31.792194 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ef3739-293c-4912-923e-4ac328274d99" containerName="collect-profiles" Nov 23 05:00:31 crc kubenswrapper[4751]: I1123 05:00:31.792207 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ef3739-293c-4912-923e-4ac328274d99" containerName="collect-profiles" Nov 23 
05:00:31 crc kubenswrapper[4751]: I1123 05:00:31.792387 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ef3739-293c-4912-923e-4ac328274d99" containerName="collect-profiles" Nov 23 05:00:31 crc kubenswrapper[4751]: I1123 05:00:31.793288 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:31 crc kubenswrapper[4751]: W1123 05:00:31.794838 4751 reflector.go:561] object-"openshift-must-gather-fcf8q"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-must-gather-fcf8q": no relationship found between node 'crc' and this object Nov 23 05:00:31 crc kubenswrapper[4751]: E1123 05:00:31.794889 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-must-gather-fcf8q\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-must-gather-fcf8q\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 05:00:31 crc kubenswrapper[4751]: W1123 05:00:31.795019 4751 reflector.go:561] object-"openshift-must-gather-fcf8q"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-must-gather-fcf8q": no relationship found between node 'crc' and this object Nov 23 05:00:31 crc kubenswrapper[4751]: E1123 05:00:31.795050 4751 reflector.go:158] "Unhandled Error" err="object-\"openshift-must-gather-fcf8q\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-must-gather-fcf8q\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 05:00:31 crc kubenswrapper[4751]: I1123 05:00:31.826665 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fcf8q/must-gather-ffw5v"] Nov 23 05:00:31 crc kubenswrapper[4751]: I1123 05:00:31.942675 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98de9830-ee0a-4453-a26d-0e456c1eef34-must-gather-output\") pod \"must-gather-ffw5v\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:31 crc kubenswrapper[4751]: I1123 05:00:31.943125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98g6q\" (UniqueName: \"kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q\") pod \"must-gather-ffw5v\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:32 crc kubenswrapper[4751]: I1123 05:00:32.045558 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98de9830-ee0a-4453-a26d-0e456c1eef34-must-gather-output\") pod \"must-gather-ffw5v\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " 
pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:32 crc kubenswrapper[4751]: I1123 05:00:32.045653 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98g6q\" (UniqueName: \"kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q\") pod \"must-gather-ffw5v\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:32 crc kubenswrapper[4751]: I1123 05:00:32.046027 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98de9830-ee0a-4453-a26d-0e456c1eef34-must-gather-output\") pod \"must-gather-ffw5v\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:32 crc kubenswrapper[4751]: I1123 05:00:32.863971 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fcf8q"/"openshift-service-ca.crt" Nov 23 05:00:33 crc kubenswrapper[4751]: E1123 05:00:33.062114 4751 projected.go:288] Couldn't get configMap openshift-must-gather-fcf8q/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 23 05:00:33 crc kubenswrapper[4751]: E1123 05:00:33.062159 4751 projected.go:194] Error preparing data for projected volume kube-api-access-98g6q for pod openshift-must-gather-fcf8q/must-gather-ffw5v: failed to sync configmap cache: timed out waiting for the condition Nov 23 05:00:33 crc kubenswrapper[4751]: E1123 05:00:33.062217 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q podName:98de9830-ee0a-4453-a26d-0e456c1eef34 nodeName:}" failed. No retries permitted until 2025-11-23 05:00:33.562201095 +0000 UTC m=+3929.755872454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-98g6q" (UniqueName: "kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q") pod "must-gather-ffw5v" (UID: "98de9830-ee0a-4453-a26d-0e456c1eef34") : failed to sync configmap cache: timed out waiting for the condition Nov 23 05:00:33 crc kubenswrapper[4751]: I1123 05:00:33.194967 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fcf8q"/"kube-root-ca.crt" Nov 23 05:00:33 crc kubenswrapper[4751]: I1123 05:00:33.572271 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98g6q\" (UniqueName: \"kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q\") pod \"must-gather-ffw5v\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:33 crc kubenswrapper[4751]: I1123 05:00:33.582073 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98g6q\" (UniqueName: \"kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q\") pod \"must-gather-ffw5v\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:33 crc kubenswrapper[4751]: I1123 05:00:33.619767 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:00:34 crc kubenswrapper[4751]: I1123 05:00:34.075488 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fcf8q/must-gather-ffw5v"] Nov 23 05:00:34 crc kubenswrapper[4751]: I1123 05:00:34.471024 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" event={"ID":"98de9830-ee0a-4453-a26d-0e456c1eef34","Type":"ContainerStarted","Data":"d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc"} Nov 23 05:00:34 crc kubenswrapper[4751]: I1123 05:00:34.471267 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" event={"ID":"98de9830-ee0a-4453-a26d-0e456c1eef34","Type":"ContainerStarted","Data":"e2e907374c0d2e94ab24371ff4963bd65675fe30196b105b0a37b77e7b01bcf8"} Nov 23 05:00:35 crc kubenswrapper[4751]: I1123 05:00:35.485986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" event={"ID":"98de9830-ee0a-4453-a26d-0e456c1eef34","Type":"ContainerStarted","Data":"728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824"} Nov 23 05:00:35 crc kubenswrapper[4751]: I1123 05:00:35.506108 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" podStartSLOduration=4.506087528 podStartE2EDuration="4.506087528s" podCreationTimestamp="2025-11-23 05:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 05:00:35.501219101 +0000 UTC m=+3931.694890500" watchObservedRunningTime="2025-11-23 05:00:35.506087528 +0000 UTC m=+3931.699758887" Nov 23 05:00:37 crc kubenswrapper[4751]: I1123 05:00:37.805349 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-765n9"] Nov 23 05:00:37 crc kubenswrapper[4751]: I1123 05:00:37.807416 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:00:37 crc kubenswrapper[4751]: I1123 05:00:37.809462 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fcf8q"/"default-dockercfg-7wxv7" Nov 23 05:00:37 crc kubenswrapper[4751]: I1123 05:00:37.957004 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82fn\" (UniqueName: \"kubernetes.io/projected/ca455535-6240-4fee-bd6b-b165994d6b13-kube-api-access-p82fn\") pod \"crc-debug-765n9\" (UID: \"ca455535-6240-4fee-bd6b-b165994d6b13\") " pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:00:37 crc kubenswrapper[4751]: I1123 05:00:37.957335 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca455535-6240-4fee-bd6b-b165994d6b13-host\") pod \"crc-debug-765n9\" (UID: \"ca455535-6240-4fee-bd6b-b165994d6b13\") " pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:00:38 crc kubenswrapper[4751]: I1123 05:00:38.058976 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p82fn\" (UniqueName: \"kubernetes.io/projected/ca455535-6240-4fee-bd6b-b165994d6b13-kube-api-access-p82fn\") pod \"crc-debug-765n9\" (UID: \"ca455535-6240-4fee-bd6b-b165994d6b13\") " pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:00:38 crc kubenswrapper[4751]: I1123 05:00:38.059371 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca455535-6240-4fee-bd6b-b165994d6b13-host\") pod \"crc-debug-765n9\" (UID: \"ca455535-6240-4fee-bd6b-b165994d6b13\") " pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:00:38 crc kubenswrapper[4751]: I1123 05:00:38.059609 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca455535-6240-4fee-bd6b-b165994d6b13-host\") pod \"crc-debug-765n9\" (UID: \"ca455535-6240-4fee-bd6b-b165994d6b13\") " pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:00:38 crc kubenswrapper[4751]: I1123 05:00:38.077236 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82fn\" (UniqueName: \"kubernetes.io/projected/ca455535-6240-4fee-bd6b-b165994d6b13-kube-api-access-p82fn\") pod \"crc-debug-765n9\" (UID: \"ca455535-6240-4fee-bd6b-b165994d6b13\") " pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:00:38 crc kubenswrapper[4751]: I1123 05:00:38.128515 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:00:38 crc kubenswrapper[4751]: W1123 05:00:38.162426 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca455535_6240_4fee_bd6b_b165994d6b13.slice/crio-881d5dbcae10458f34ef8dd9f2c894f8da847472315514927acc9151be763a46 WatchSource:0}: Error finding container 881d5dbcae10458f34ef8dd9f2c894f8da847472315514927acc9151be763a46: Status 404 returned error can't find the container with id 881d5dbcae10458f34ef8dd9f2c894f8da847472315514927acc9151be763a46 Nov 23 05:00:38 crc kubenswrapper[4751]: I1123 05:00:38.518001 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/crc-debug-765n9" event={"ID":"ca455535-6240-4fee-bd6b-b165994d6b13","Type":"ContainerStarted","Data":"cfccae2001eb37b087a249d8ccae204adaa84087f52195406ae3e5fb343114bb"} Nov 23 05:00:38 crc kubenswrapper[4751]: I1123 05:00:38.518564 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/crc-debug-765n9" event={"ID":"ca455535-6240-4fee-bd6b-b165994d6b13","Type":"ContainerStarted","Data":"881d5dbcae10458f34ef8dd9f2c894f8da847472315514927acc9151be763a46"} Nov 23 05:00:38 crc kubenswrapper[4751]: I1123 05:00:38.535991 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fcf8q/crc-debug-765n9" podStartSLOduration=1.5359758650000002 podStartE2EDuration="1.535975865s" podCreationTimestamp="2025-11-23 05:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 05:00:38.533707096 +0000 UTC m=+3934.727378465" watchObservedRunningTime="2025-11-23 05:00:38.535975865 +0000 UTC m=+3934.729647224" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.161803 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29397901-khbbr"] Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.163471 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.166736 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-fernet-keys\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.166781 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-combined-ca-bundle\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.166900 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll58n\" (UniqueName: \"kubernetes.io/projected/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-kube-api-access-ll58n\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.167017 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-config-data\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.177770 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29397901-khbbr"] Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.268153 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-fernet-keys\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.268465 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-combined-ca-bundle\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.268575 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll58n\" (UniqueName: \"kubernetes.io/projected/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-kube-api-access-ll58n\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.268650 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-config-data\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.274468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-fernet-keys\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.280303 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-combined-ca-bundle\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.280369 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-config-data\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.287300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll58n\" (UniqueName: \"kubernetes.io/projected/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-kube-api-access-ll58n\") pod \"keystone-cron-29397901-khbbr\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.482002 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:00 crc kubenswrapper[4751]: I1123 05:01:00.975841 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29397901-khbbr"] Nov 23 05:01:01 crc kubenswrapper[4751]: I1123 05:01:01.716784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29397901-khbbr" event={"ID":"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f","Type":"ContainerStarted","Data":"d59783ff89f8c97fd758d00dcef64839de24422c3926035535a39bc44c5d1a0b"} Nov 23 05:01:01 crc kubenswrapper[4751]: I1123 05:01:01.717132 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29397901-khbbr" event={"ID":"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f","Type":"ContainerStarted","Data":"ea9709c511e335557b14d56a584a8924aed2f0ba1665947602115604e580d9ac"} Nov 23 05:01:01 crc kubenswrapper[4751]: I1123 05:01:01.743249 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29397901-khbbr" podStartSLOduration=1.743223209 podStartE2EDuration="1.743223209s" podCreationTimestamp="2025-11-23 05:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 05:01:01.732677298 +0000 UTC m=+3957.926348667" watchObservedRunningTime="2025-11-23 05:01:01.743223209 +0000 UTC m=+3957.936894568" Nov 23 05:01:03 crc kubenswrapper[4751]: I1123 05:01:03.735149 4751 generic.go:334] "Generic (PLEG): container finished" podID="8df5b4a7-b9ad-4335-9b22-a7d735f70f6f" containerID="d59783ff89f8c97fd758d00dcef64839de24422c3926035535a39bc44c5d1a0b" exitCode=0 Nov 23 05:01:03 crc kubenswrapper[4751]: I1123 05:01:03.735395 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29397901-khbbr" event={"ID":"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f","Type":"ContainerDied","Data":"d59783ff89f8c97fd758d00dcef64839de24422c3926035535a39bc44c5d1a0b"} Nov 23 05:01:05 crc kubenswrapper[4751]: 
I1123 05:01:05.145636 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.261589 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-combined-ca-bundle\") pod \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.261650 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-fernet-keys\") pod \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.261823 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-config-data\") pod \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.261917 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll58n\" (UniqueName: \"kubernetes.io/projected/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-kube-api-access-ll58n\") pod \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\" (UID: \"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f\") " Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.267578 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-kube-api-access-ll58n" (OuterVolumeSpecName: "kube-api-access-ll58n") pod "8df5b4a7-b9ad-4335-9b22-a7d735f70f6f" (UID: "8df5b4a7-b9ad-4335-9b22-a7d735f70f6f"). InnerVolumeSpecName "kube-api-access-ll58n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.267909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8df5b4a7-b9ad-4335-9b22-a7d735f70f6f" (UID: "8df5b4a7-b9ad-4335-9b22-a7d735f70f6f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.293208 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8df5b4a7-b9ad-4335-9b22-a7d735f70f6f" (UID: "8df5b4a7-b9ad-4335-9b22-a7d735f70f6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.315582 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-config-data" (OuterVolumeSpecName: "config-data") pod "8df5b4a7-b9ad-4335-9b22-a7d735f70f6f" (UID: "8df5b4a7-b9ad-4335-9b22-a7d735f70f6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.364609 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.364647 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.364658 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.364669 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll58n\" (UniqueName: \"kubernetes.io/projected/8df5b4a7-b9ad-4335-9b22-a7d735f70f6f-kube-api-access-ll58n\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.757394 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29397901-khbbr" event={"ID":"8df5b4a7-b9ad-4335-9b22-a7d735f70f6f","Type":"ContainerDied","Data":"ea9709c511e335557b14d56a584a8924aed2f0ba1665947602115604e580d9ac"} Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.757430 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea9709c511e335557b14d56a584a8924aed2f0ba1665947602115604e580d9ac" Nov 23 05:01:05 crc kubenswrapper[4751]: I1123 05:01:05.757482 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29397901-khbbr" Nov 23 05:01:11 crc kubenswrapper[4751]: I1123 05:01:11.811909 4751 generic.go:334] "Generic (PLEG): container finished" podID="ca455535-6240-4fee-bd6b-b165994d6b13" containerID="cfccae2001eb37b087a249d8ccae204adaa84087f52195406ae3e5fb343114bb" exitCode=0 Nov 23 05:01:11 crc kubenswrapper[4751]: I1123 05:01:11.811979 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/crc-debug-765n9" event={"ID":"ca455535-6240-4fee-bd6b-b165994d6b13","Type":"ContainerDied","Data":"cfccae2001eb37b087a249d8ccae204adaa84087f52195406ae3e5fb343114bb"} Nov 23 05:01:12 crc kubenswrapper[4751]: I1123 05:01:12.983714 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.020904 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-765n9"] Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.029550 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-765n9"] Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.098749 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p82fn\" (UniqueName: \"kubernetes.io/projected/ca455535-6240-4fee-bd6b-b165994d6b13-kube-api-access-p82fn\") pod \"ca455535-6240-4fee-bd6b-b165994d6b13\" (UID: \"ca455535-6240-4fee-bd6b-b165994d6b13\") " Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.098923 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca455535-6240-4fee-bd6b-b165994d6b13-host\") pod \"ca455535-6240-4fee-bd6b-b165994d6b13\" (UID: \"ca455535-6240-4fee-bd6b-b165994d6b13\") " Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.099082 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca455535-6240-4fee-bd6b-b165994d6b13-host" (OuterVolumeSpecName: "host") pod "ca455535-6240-4fee-bd6b-b165994d6b13" (UID: "ca455535-6240-4fee-bd6b-b165994d6b13"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.099397 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca455535-6240-4fee-bd6b-b165994d6b13-host\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.112718 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca455535-6240-4fee-bd6b-b165994d6b13-kube-api-access-p82fn" (OuterVolumeSpecName: "kube-api-access-p82fn") pod "ca455535-6240-4fee-bd6b-b165994d6b13" (UID: "ca455535-6240-4fee-bd6b-b165994d6b13"). InnerVolumeSpecName "kube-api-access-p82fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.201467 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p82fn\" (UniqueName: \"kubernetes.io/projected/ca455535-6240-4fee-bd6b-b165994d6b13-kube-api-access-p82fn\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.865887 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881d5dbcae10458f34ef8dd9f2c894f8da847472315514927acc9151be763a46" Nov 23 05:01:13 crc kubenswrapper[4751]: I1123 05:01:13.865987 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-765n9" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.215825 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-5nc94"] Nov 23 05:01:14 crc kubenswrapper[4751]: E1123 05:01:14.216294 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca455535-6240-4fee-bd6b-b165994d6b13" containerName="container-00" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.216313 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca455535-6240-4fee-bd6b-b165994d6b13" containerName="container-00" Nov 23 05:01:14 crc kubenswrapper[4751]: E1123 05:01:14.216389 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df5b4a7-b9ad-4335-9b22-a7d735f70f6f" containerName="keystone-cron" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.216402 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df5b4a7-b9ad-4335-9b22-a7d735f70f6f" containerName="keystone-cron" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.216678 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca455535-6240-4fee-bd6b-b165994d6b13" containerName="container-00" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.216727 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df5b4a7-b9ad-4335-9b22-a7d735f70f6f" containerName="keystone-cron" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.217678 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.219859 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fcf8q"/"default-dockercfg-7wxv7" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.322225 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfcf1687-dc60-4529-9e38-96246407e1a7-host\") pod \"crc-debug-5nc94\" (UID: \"bfcf1687-dc60-4529-9e38-96246407e1a7\") " pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.322673 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prnp4\" (UniqueName: \"kubernetes.io/projected/bfcf1687-dc60-4529-9e38-96246407e1a7-kube-api-access-prnp4\") pod \"crc-debug-5nc94\" (UID: \"bfcf1687-dc60-4529-9e38-96246407e1a7\") " pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.425133 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfcf1687-dc60-4529-9e38-96246407e1a7-host\") pod \"crc-debug-5nc94\" (UID: \"bfcf1687-dc60-4529-9e38-96246407e1a7\") " pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.425481 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prnp4\" (UniqueName: \"kubernetes.io/projected/bfcf1687-dc60-4529-9e38-96246407e1a7-kube-api-access-prnp4\") pod \"crc-debug-5nc94\" (UID: \"bfcf1687-dc60-4529-9e38-96246407e1a7\") " pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.425622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bfcf1687-dc60-4529-9e38-96246407e1a7-host\") pod \"crc-debug-5nc94\" (UID: \"bfcf1687-dc60-4529-9e38-96246407e1a7\") " pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.460234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prnp4\" (UniqueName: \"kubernetes.io/projected/bfcf1687-dc60-4529-9e38-96246407e1a7-kube-api-access-prnp4\") pod \"crc-debug-5nc94\" (UID: \"bfcf1687-dc60-4529-9e38-96246407e1a7\") " pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.536600 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:14 crc kubenswrapper[4751]: W1123 05:01:14.574231 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfcf1687_dc60_4529_9e38_96246407e1a7.slice/crio-cb5ca6a2b996660d2eb88575667d698cfe6e1482d615db8af11ede6a1a3195ce WatchSource:0}: Error finding container cb5ca6a2b996660d2eb88575667d698cfe6e1482d615db8af11ede6a1a3195ce: Status 404 returned error can't find the container with id cb5ca6a2b996660d2eb88575667d698cfe6e1482d615db8af11ede6a1a3195ce Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.660630 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca455535-6240-4fee-bd6b-b165994d6b13" path="/var/lib/kubelet/pods/ca455535-6240-4fee-bd6b-b165994d6b13/volumes" Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.880208 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/crc-debug-5nc94" event={"ID":"bfcf1687-dc60-4529-9e38-96246407e1a7","Type":"ContainerStarted","Data":"81fac8fad68608a0acabb44f5f445640f88557dc876c9bd2a167029ade904494"} Nov 23 05:01:14 crc kubenswrapper[4751]: I1123 05:01:14.880633 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/crc-debug-5nc94" event={"ID":"bfcf1687-dc60-4529-9e38-96246407e1a7","Type":"ContainerStarted","Data":"cb5ca6a2b996660d2eb88575667d698cfe6e1482d615db8af11ede6a1a3195ce"} Nov 23 05:01:15 crc kubenswrapper[4751]: I1123 05:01:15.427223 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-5nc94"] Nov 23 05:01:15 crc kubenswrapper[4751]: I1123 05:01:15.436832 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-5nc94"] Nov 23 05:01:15 crc kubenswrapper[4751]: I1123 05:01:15.893285 4751 generic.go:334] "Generic (PLEG): container finished" podID="bfcf1687-dc60-4529-9e38-96246407e1a7" containerID="81fac8fad68608a0acabb44f5f445640f88557dc876c9bd2a167029ade904494" exitCode=0 Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.006510 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.156999 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prnp4\" (UniqueName: \"kubernetes.io/projected/bfcf1687-dc60-4529-9e38-96246407e1a7-kube-api-access-prnp4\") pod \"bfcf1687-dc60-4529-9e38-96246407e1a7\" (UID: \"bfcf1687-dc60-4529-9e38-96246407e1a7\") " Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.157629 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfcf1687-dc60-4529-9e38-96246407e1a7-host\") pod \"bfcf1687-dc60-4529-9e38-96246407e1a7\" (UID: \"bfcf1687-dc60-4529-9e38-96246407e1a7\") " Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.157719 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfcf1687-dc60-4529-9e38-96246407e1a7-host" (OuterVolumeSpecName: "host") pod "bfcf1687-dc60-4529-9e38-96246407e1a7" (UID: "bfcf1687-dc60-4529-9e38-96246407e1a7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.158778 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfcf1687-dc60-4529-9e38-96246407e1a7-host\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.162452 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfcf1687-dc60-4529-9e38-96246407e1a7-kube-api-access-prnp4" (OuterVolumeSpecName: "kube-api-access-prnp4") pod "bfcf1687-dc60-4529-9e38-96246407e1a7" (UID: "bfcf1687-dc60-4529-9e38-96246407e1a7"). InnerVolumeSpecName "kube-api-access-prnp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.260280 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prnp4\" (UniqueName: \"kubernetes.io/projected/bfcf1687-dc60-4529-9e38-96246407e1a7-kube-api-access-prnp4\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.627697 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-2vdf7"] Nov 23 05:01:16 crc kubenswrapper[4751]: E1123 05:01:16.628086 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfcf1687-dc60-4529-9e38-96246407e1a7" containerName="container-00" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.628098 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfcf1687-dc60-4529-9e38-96246407e1a7" containerName="container-00" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.628283 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfcf1687-dc60-4529-9e38-96246407e1a7" containerName="container-00" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.628889 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.653411 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfcf1687-dc60-4529-9e38-96246407e1a7" path="/var/lib/kubelet/pods/bfcf1687-dc60-4529-9e38-96246407e1a7/volumes" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.770792 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzlhq\" (UniqueName: \"kubernetes.io/projected/4c254f86-58d8-4456-b02c-e2b526468f58-kube-api-access-rzlhq\") pod \"crc-debug-2vdf7\" (UID: \"4c254f86-58d8-4456-b02c-e2b526468f58\") " pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.773314 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c254f86-58d8-4456-b02c-e2b526468f58-host\") pod \"crc-debug-2vdf7\" (UID: \"4c254f86-58d8-4456-b02c-e2b526468f58\") " pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.876242 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzlhq\" (UniqueName: \"kubernetes.io/projected/4c254f86-58d8-4456-b02c-e2b526468f58-kube-api-access-rzlhq\") pod \"crc-debug-2vdf7\" (UID: \"4c254f86-58d8-4456-b02c-e2b526468f58\") " pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.876512 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c254f86-58d8-4456-b02c-e2b526468f58-host\") pod \"crc-debug-2vdf7\" (UID: \"4c254f86-58d8-4456-b02c-e2b526468f58\") " pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.876626 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c254f86-58d8-4456-b02c-e2b526468f58-host\") pod \"crc-debug-2vdf7\" (UID: \"4c254f86-58d8-4456-b02c-e2b526468f58\") " pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.895996 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzlhq\" (UniqueName: \"kubernetes.io/projected/4c254f86-58d8-4456-b02c-e2b526468f58-kube-api-access-rzlhq\") pod \"crc-debug-2vdf7\" (UID: \"4c254f86-58d8-4456-b02c-e2b526468f58\") " pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.907693 4751 scope.go:117] "RemoveContainer" containerID="81fac8fad68608a0acabb44f5f445640f88557dc876c9bd2a167029ade904494" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.907968 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-5nc94" Nov 23 05:01:16 crc kubenswrapper[4751]: I1123 05:01:16.944108 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:16 crc kubenswrapper[4751]: W1123 05:01:16.991257 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c254f86_58d8_4456_b02c_e2b526468f58.slice/crio-0bd4c1ead8d122d8c03a5e46ca819f2f49c962373a107a516363c670366fea09 WatchSource:0}: Error finding container 0bd4c1ead8d122d8c03a5e46ca819f2f49c962373a107a516363c670366fea09: Status 404 returned error can't find the container with id 0bd4c1ead8d122d8c03a5e46ca819f2f49c962373a107a516363c670366fea09 Nov 23 05:01:17 crc kubenswrapper[4751]: I1123 05:01:17.921465 4751 generic.go:334] "Generic (PLEG): container finished" podID="4c254f86-58d8-4456-b02c-e2b526468f58" containerID="c848fff1e8a3ee8d73f4c715c522c8053796af6c00d8402fdbcd1c4ad558de1d" exitCode=0 Nov 23 05:01:17 crc kubenswrapper[4751]: I1123 05:01:17.921582 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" event={"ID":"4c254f86-58d8-4456-b02c-e2b526468f58","Type":"ContainerDied","Data":"c848fff1e8a3ee8d73f4c715c522c8053796af6c00d8402fdbcd1c4ad558de1d"} Nov 23 05:01:17 crc kubenswrapper[4751]: I1123 05:01:17.922662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" event={"ID":"4c254f86-58d8-4456-b02c-e2b526468f58","Type":"ContainerStarted","Data":"0bd4c1ead8d122d8c03a5e46ca819f2f49c962373a107a516363c670366fea09"} Nov 23 05:01:17 crc kubenswrapper[4751]: I1123 05:01:17.965912 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-2vdf7"] Nov 23 05:01:17 crc kubenswrapper[4751]: I1123 05:01:17.976779 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fcf8q/crc-debug-2vdf7"] Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.023326 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.119931 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzlhq\" (UniqueName: \"kubernetes.io/projected/4c254f86-58d8-4456-b02c-e2b526468f58-kube-api-access-rzlhq\") pod \"4c254f86-58d8-4456-b02c-e2b526468f58\" (UID: \"4c254f86-58d8-4456-b02c-e2b526468f58\") " Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.120478 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c254f86-58d8-4456-b02c-e2b526468f58-host\") pod \"4c254f86-58d8-4456-b02c-e2b526468f58\" (UID: \"4c254f86-58d8-4456-b02c-e2b526468f58\") " Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.120652 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c254f86-58d8-4456-b02c-e2b526468f58-host" (OuterVolumeSpecName: "host") pod "4c254f86-58d8-4456-b02c-e2b526468f58" (UID: "4c254f86-58d8-4456-b02c-e2b526468f58"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.120872 4751 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c254f86-58d8-4456-b02c-e2b526468f58-host\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.131905 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c254f86-58d8-4456-b02c-e2b526468f58-kube-api-access-rzlhq" (OuterVolumeSpecName: "kube-api-access-rzlhq") pod "4c254f86-58d8-4456-b02c-e2b526468f58" (UID: "4c254f86-58d8-4456-b02c-e2b526468f58"). InnerVolumeSpecName "kube-api-access-rzlhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.223022 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzlhq\" (UniqueName: \"kubernetes.io/projected/4c254f86-58d8-4456-b02c-e2b526468f58-kube-api-access-rzlhq\") on node \"crc\" DevicePath \"\"" Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.956223 4751 scope.go:117] "RemoveContainer" containerID="c848fff1e8a3ee8d73f4c715c522c8053796af6c00d8402fdbcd1c4ad558de1d" Nov 23 05:01:19 crc kubenswrapper[4751]: I1123 05:01:19.956496 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fcf8q/crc-debug-2vdf7" Nov 23 05:01:20 crc kubenswrapper[4751]: I1123 05:01:20.653570 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c254f86-58d8-4456-b02c-e2b526468f58" path="/var/lib/kubelet/pods/4c254f86-58d8-4456-b02c-e2b526468f58/volumes" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.140838 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b8f7cfdb6-q2828_356c133f-02f2-453d-a0a4-018aa4741eee/barbican-api/0.log" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.263489 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6b8f7cfdb6-q2828_356c133f-02f2-453d-a0a4-018aa4741eee/barbican-api-log/0.log" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.357340 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65dbdd4878-rj2c4_e290c0f4-7b34-4063-a8f8-aa5123762b03/barbican-keystone-listener/0.log" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.407521 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65dbdd4878-rj2c4_e290c0f4-7b34-4063-a8f8-aa5123762b03/barbican-keystone-listener-log/0.log" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.646612 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86c6898967-wnj7z_1babe827-384d-4185-90fb-021a93e62b38/barbican-worker-log/0.log" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.695886 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86c6898967-wnj7z_1babe827-384d-4185-90fb-021a93e62b38/barbican-worker/0.log" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.827821 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9wmkl_ac73cf10-7aa5-4958-9238-d5473d368ceb/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.914699 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_5c883930-39a6-4aa2-8be9-08ddb0d187e8/ceilometer-central-agent/0.log" Nov 23 05:01:42 crc kubenswrapper[4751]: I1123 05:01:42.998426 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5c883930-39a6-4aa2-8be9-08ddb0d187e8/ceilometer-notification-agent/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.010450 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5c883930-39a6-4aa2-8be9-08ddb0d187e8/proxy-httpd/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.032374 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5c883930-39a6-4aa2-8be9-08ddb0d187e8/sg-core/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.243854 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f/cinder-api-log/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.261668 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_84b707eb-3fc6-4c5e-a67e-35eb64bf0d6f/cinder-api/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.437592 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ba61bff1-41f9-4e95-bde0-0da7b4000a1c/probe/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.516321 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ba61bff1-41f9-4e95-bde0-0da7b4000a1c/cinder-scheduler/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.517388 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-9xskg_d8c7c9fe-7d35-413c-8738-31ec126e8d80/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.748500 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-dxd4x_ddafc7c0-5c18-49f0-b609-f68959f5bc29/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.759747 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-gt9x6_d97d28a3-afb1-41a4-b623-ed9e4b89ca31/init/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.904862 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-gt9x6_d97d28a3-afb1-41a4-b623-ed9e4b89ca31/init/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.977133 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-99kcg_5a46c73e-f53a-4bcc-8a3a-d5982ecc6649/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:43 crc kubenswrapper[4751]: I1123 05:01:43.994840 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-gt9x6_d97d28a3-afb1-41a4-b623-ed9e4b89ca31/dnsmasq-dns/0.log" Nov 23 05:01:44 crc kubenswrapper[4751]: I1123 05:01:44.186240 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_06ebe447-bb04-442d-9fdf-752c1dd5a747/glance-log/0.log" Nov 23 05:01:44 crc kubenswrapper[4751]: I1123 05:01:44.201894 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_06ebe447-bb04-442d-9fdf-752c1dd5a747/glance-httpd/0.log" Nov 23 
05:01:44 crc kubenswrapper[4751]: I1123 05:01:44.361239 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3c09a9e4-3f1b-4732-9b6b-fcd54fe21650/glance-httpd/0.log" Nov 23 05:01:44 crc kubenswrapper[4751]: I1123 05:01:44.399798 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3c09a9e4-3f1b-4732-9b6b-fcd54fe21650/glance-log/0.log" Nov 23 05:01:44 crc kubenswrapper[4751]: I1123 05:01:44.537857 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-789489d584-slcs8_49f1490c-4b27-47c0-bc36-688b467ebe2c/horizon/0.log" Nov 23 05:01:44 crc kubenswrapper[4751]: I1123 05:01:44.656221 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-n5mqf_3068980f-3607-43b3-b505-d4663202d8dd/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:44 crc kubenswrapper[4751]: I1123 05:01:44.888364 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dm6wp_42446795-8c4f-4b34-b87c-63fc5306226e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:44 crc kubenswrapper[4751]: I1123 05:01:44.942656 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-789489d584-slcs8_49f1490c-4b27-47c0-bc36-688b467ebe2c/horizon-log/0.log" Nov 23 05:01:45 crc kubenswrapper[4751]: I1123 05:01:45.242117 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-66b57bb577-p2b4n_bf363ce8-cc62-4c00-90f1-adfe3e26e834/keystone-api/0.log" Nov 23 05:01:45 crc kubenswrapper[4751]: I1123 05:01:45.894047 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29397901-khbbr_8df5b4a7-b9ad-4335-9b22-a7d735f70f6f/keystone-cron/0.log" Nov 23 05:01:45 crc kubenswrapper[4751]: I1123 05:01:45.936863 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_edbd1755-69d6-4ae1-809a-a64203c0c090/kube-state-metrics/0.log" Nov 23 05:01:46 crc kubenswrapper[4751]: I1123 05:01:46.016995 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-44fjn_b1b8004d-68f3-41c1-ac68-2b35a527fd88/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:46 crc kubenswrapper[4751]: I1123 05:01:46.349774 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64b84b8669-6xvhn_61f0356e-5917-45f1-86a3-75f15d10ac71/neutron-httpd/0.log" Nov 23 05:01:46 crc kubenswrapper[4751]: I1123 05:01:46.353444 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-64b84b8669-6xvhn_61f0356e-5917-45f1-86a3-75f15d10ac71/neutron-api/0.log" Nov 23 05:01:46 crc kubenswrapper[4751]: I1123 05:01:46.383363 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c6rz9_e8f0c75e-2965-4ab3-841c-aae06611df5a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:46 crc kubenswrapper[4751]: I1123 05:01:46.937834 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a2b4e65a-c2e7-4040-9230-063782c96cca/nova-api-log/0.log" Nov 23 05:01:47 crc kubenswrapper[4751]: I1123 05:01:47.099950 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1085be93-49b1-4d78-818d-ef37248136f4/nova-cell0-conductor-conductor/0.log" Nov 
23 05:01:47 crc kubenswrapper[4751]: I1123 05:01:47.495360 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a2b4e65a-c2e7-4040-9230-063782c96cca/nova-api-api/0.log" Nov 23 05:01:47 crc kubenswrapper[4751]: I1123 05:01:47.593647 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_c053622f-478e-4de7-9a6b-43c86c5ada7b/nova-cell1-conductor-conductor/0.log" Nov 23 05:01:47 crc kubenswrapper[4751]: I1123 05:01:47.790910 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c7b90669-75e1-4f89-860f-7dfa61d6fa48/nova-cell1-novncproxy-novncproxy/0.log" Nov 23 05:01:47 crc kubenswrapper[4751]: I1123 05:01:47.792027 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-4nzvn_88fdef25-3ea0-48cf-8c54-22776698b6dc/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:48 crc kubenswrapper[4751]: I1123 05:01:48.027860 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b98d09fb-41ac-4b07-8334-72a33cf11ba6/nova-metadata-log/0.log" Nov 23 05:01:48 crc kubenswrapper[4751]: I1123 05:01:48.276827 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_44c38b4f-095f-46ec-8a95-d7335e696f1b/mysql-bootstrap/0.log" Nov 23 05:01:48 crc kubenswrapper[4751]: I1123 05:01:48.367536 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1d5c19e2-e749-4c94-b8ce-04b9a34b65ff/nova-scheduler-scheduler/0.log" Nov 23 05:01:48 crc kubenswrapper[4751]: I1123 05:01:48.475355 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_44c38b4f-095f-46ec-8a95-d7335e696f1b/mysql-bootstrap/0.log" Nov 23 05:01:48 crc kubenswrapper[4751]: I1123 05:01:48.530253 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_44c38b4f-095f-46ec-8a95-d7335e696f1b/galera/0.log" Nov 23 05:01:48 crc kubenswrapper[4751]: I1123 05:01:48.710206 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f73a5c1f-fac1-4b2d-9611-819ac8ebd57a/mysql-bootstrap/0.log" Nov 23 05:01:48 crc kubenswrapper[4751]: I1123 05:01:48.869666 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f73a5c1f-fac1-4b2d-9611-819ac8ebd57a/galera/0.log" Nov 23 05:01:48 crc kubenswrapper[4751]: I1123 05:01:48.877870 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f73a5c1f-fac1-4b2d-9611-819ac8ebd57a/mysql-bootstrap/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.047107 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d8f1f72f-cb69-43a3-8f06-1f348a731330/openstackclient/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.156926 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-v5k4s_eeb714f2-5bf4-436c-9a9a-62b9ebfb37c1/openstack-network-exporter/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.319762 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26bzb_11513e97-ce99-4112-bf99-386d0074fc15/ovsdb-server-init/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.449072 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_b98d09fb-41ac-4b07-8334-72a33cf11ba6/nova-metadata-metadata/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.505423 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26bzb_11513e97-ce99-4112-bf99-386d0074fc15/ovsdb-server-init/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.510824 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26bzb_11513e97-ce99-4112-bf99-386d0074fc15/ovsdb-server/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.554888 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-26bzb_11513e97-ce99-4112-bf99-386d0074fc15/ovs-vswitchd/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.682879 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x65b7_4e8bfa9a-1b92-428e-a443-21ccb190a5bd/ovn-controller/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.767966 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5q5kr_dd88992f-1e56-48ed-913c-4ecd0fc20767/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.948413 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_95cf38ba-5edd-4ff7-9213-966b6498df4e/openstack-network-exporter/0.log" Nov 23 05:01:49 crc kubenswrapper[4751]: I1123 05:01:49.953636 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_95cf38ba-5edd-4ff7-9213-966b6498df4e/ovn-northd/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.091654 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28/openstack-network-exporter/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.123088 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_eff6a1ce-1fe9-4ae0-882a-aadeb5e35d28/ovsdbserver-nb/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.215869 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d49c307d-c4e7-412a-9506-71b93c1a1557/openstack-network-exporter/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.278104 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d49c307d-c4e7-412a-9506-71b93c1a1557/ovsdbserver-sb/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.455914 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d9c6b99fd-4x95v_748e93b6-b72d-4fd1-8542-b37b5d4d7031/placement-api/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.546156 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2ebb2468-5894-4d38-ac88-10033af58026/setup-container/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.641932 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d9c6b99fd-4x95v_748e93b6-b72d-4fd1-8542-b37b5d4d7031/placement-log/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.756482 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2ebb2468-5894-4d38-ac88-10033af58026/setup-container/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.784890 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2ebb2468-5894-4d38-ac88-10033af58026/rabbitmq/0.log" Nov 23 05:01:50 crc kubenswrapper[4751]: I1123 05:01:50.863013 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcb1b3bf-2ace-42f9-845f-8b993051016b/setup-container/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.020233 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcb1b3bf-2ace-42f9-845f-8b993051016b/setup-container/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.074172 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pv9kg_be595bce-317a-48c8-949e-2947f0954d0b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.124885 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bcb1b3bf-2ace-42f9-845f-8b993051016b/rabbitmq/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.246994 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7c89h_ffccd27d-7f9b-49be-9f33-078fc7cdfe25/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.363393 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lsgf6_381c6054-1b64-48db-81d6-12e6a95dcbe2/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.536051 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-p922l_98681550-3696-4f63-a16d-edaf78bf06fb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.618237 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9cnpr_53cbfe3d-8559-41aa-8352-5480c56e3624/ssh-known-hosts-edpm-deployment/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.859168 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-755d45c5-9j9lj_69aeb1a6-d144-470d-8b47-f4e0126bd9fb/proxy-server/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.914052 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-755d45c5-9j9lj_69aeb1a6-d144-470d-8b47-f4e0126bd9fb/proxy-httpd/0.log" Nov 23 05:01:51 crc kubenswrapper[4751]: I1123 05:01:51.985824 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-d526x_cc27467d-f028-4378-8e74-84b22dbc0048/swift-ring-rebalance/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.093644 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/account-auditor/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.194659 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/account-reaper/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.229816 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/account-replicator/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.301648 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/account-server/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.311758 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/container-auditor/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.425930 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/container-replicator/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.443096 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/container-server/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.522826 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/container-updater/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.545461 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-auditor/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.625373 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-expirer/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.707372 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-server/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.715544 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-replicator/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.776973 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/object-updater/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.880670 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/rsync/0.log" Nov 23 05:01:52 crc kubenswrapper[4751]: I1123 05:01:52.939163 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea516dc6-70bc-461c-8b7e-e269f9287da4/swift-recon-cron/0.log" Nov 23 05:01:53 crc kubenswrapper[4751]: I1123 05:01:53.035745 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-xpqgn_8d72beb8-693c-4168-99d2-219a12911413/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:01:53 crc kubenswrapper[4751]: I1123 05:01:53.206319 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_941e3bda-6f4a-481b-bb73-1c531d70607e/tempest-tests-tempest-tests-runner/0.log" Nov 23 05:01:53 crc kubenswrapper[4751]: I1123 05:01:53.259883 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_9222b521-983b-458c-b312-411689b31bec/test-operator-logs-container/0.log" Nov 23 05:01:53 crc kubenswrapper[4751]: I1123 05:01:53.402196 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5nm9v_8e5d5738-2df6-456a-b038-9605e0da3b66/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 23 05:02:04 crc 
Nov 23 05:02:04 crc kubenswrapper[4751]: I1123 05:02:04.257762 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3125267d-8f09-4e74-90e2-a8f85e538b86/memcached/0.log"
Nov 23 05:02:08 crc kubenswrapper[4751]: I1123 05:02:08.114788 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 05:02:08 crc kubenswrapper[4751]: I1123 05:02:08.115841 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.563034 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rbqz4"]
Nov 23 05:02:14 crc kubenswrapper[4751]: E1123 05:02:14.564018 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c254f86-58d8-4456-b02c-e2b526468f58" containerName="container-00"
Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.564033 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c254f86-58d8-4456-b02c-e2b526468f58" containerName="container-00"
Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.564306 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c254f86-58d8-4456-b02c-e2b526468f58" containerName="container-00"
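Among the entries just above, prober.go:107 records a failed Liveness probe for machine-config-daemon-pfb45: the kubelet's HTTP GET to 127.0.0.1:8798/health was refused outright. A sketch for summarizing such records across the excerpt, with kubelet.log once more a hypothetical filename:

import re
from collections import Counter

# Sketch: count the prober.go "Probe failed" records by pod and probe type,
# e.g. the Liveness failure for machine-config-daemon-pfb45 logged above.
PROBE = re.compile(r'"Probe failed" probeType="(?P<type>[^"]+)" pod="(?P<pod>[^"]+)"')

failures = Counter()
with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        for m in PROBE.finditer(line):
            failures[(m.group("pod"), m.group("type"))] += 1

for (pod, probe_type), count in failures.most_common():
    print(f"{count:3d}  {probe_type:<9}  {pod}")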
Need to start a new one" pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.573708 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rbqz4"] Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.701647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7j9m\" (UniqueName: \"kubernetes.io/projected/7eb77113-c29a-479d-a144-08f4da4e0881-kube-api-access-j7j9m\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.701969 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-catalog-content\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.702086 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-utilities\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.803485 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-catalog-content\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.803533 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-utilities\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.803756 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7j9m\" (UniqueName: \"kubernetes.io/projected/7eb77113-c29a-479d-a144-08f4da4e0881-kube-api-access-j7j9m\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.803937 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-catalog-content\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.805659 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-utilities\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.823515 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j7j9m\" (UniqueName: \"kubernetes.io/projected/7eb77113-c29a-479d-a144-08f4da4e0881-kube-api-access-j7j9m\") pod \"certified-operators-rbqz4\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") " pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:14 crc kubenswrapper[4751]: I1123 05:02:14.895585 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rbqz4" Nov 23 05:02:15 crc kubenswrapper[4751]: I1123 05:02:15.393853 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rbqz4"] Nov 23 05:02:15 crc kubenswrapper[4751]: I1123 05:02:15.479322 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbqz4" event={"ID":"7eb77113-c29a-479d-a144-08f4da4e0881","Type":"ContainerStarted","Data":"72453f7e2f293aae80f106bfc16a6b023e8b484822e83efba228041a20d846e0"} Nov 23 05:02:16 crc kubenswrapper[4751]: I1123 05:02:16.490228 4751 generic.go:334] "Generic (PLEG): container finished" podID="7eb77113-c29a-479d-a144-08f4da4e0881" containerID="a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5" exitCode=0 Nov 23 05:02:16 crc kubenswrapper[4751]: I1123 05:02:16.490452 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbqz4" event={"ID":"7eb77113-c29a-479d-a144-08f4da4e0881","Type":"ContainerDied","Data":"a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5"} Nov 23 05:02:17 crc kubenswrapper[4751]: I1123 05:02:17.501102 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbqz4" event={"ID":"7eb77113-c29a-479d-a144-08f4da4e0881","Type":"ContainerStarted","Data":"af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738"} Nov 23 05:02:18 crc kubenswrapper[4751]: I1123 05:02:18.517571 4751 generic.go:334] "Generic (PLEG): container finished" podID="7eb77113-c29a-479d-a144-08f4da4e0881" containerID="af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738" exitCode=0 Nov 23 05:02:18 crc kubenswrapper[4751]: I1123 05:02:18.517699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbqz4" event={"ID":"7eb77113-c29a-479d-a144-08f4da4e0881","Type":"ContainerDied","Data":"af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738"} Nov 23 05:02:19 crc kubenswrapper[4751]: I1123 05:02:19.529044 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbqz4" event={"ID":"7eb77113-c29a-479d-a144-08f4da4e0881","Type":"ContainerStarted","Data":"d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708"} Nov 23 05:02:19 crc kubenswrapper[4751]: I1123 05:02:19.554135 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rbqz4" podStartSLOduration=3.144387809 podStartE2EDuration="5.554119144s" podCreationTimestamp="2025-11-23 05:02:14 +0000 UTC" firstStartedPulling="2025-11-23 05:02:16.492403669 +0000 UTC m=+4032.686075048" lastFinishedPulling="2025-11-23 05:02:18.902135014 +0000 UTC m=+4035.095806383" observedRunningTime="2025-11-23 05:02:19.547279328 +0000 UTC m=+4035.740950677" watchObservedRunningTime="2025-11-23 05:02:19.554119144 +0000 UTC m=+4035.747790503" Nov 23 05:02:20 crc kubenswrapper[4751]: I1123 05:02:20.935651 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/util/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.081068 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/pull/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.097260 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/pull/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.121916 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/util/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.279683 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/pull/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.283796 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/util/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.332202 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6d24f72cafbf2dc1f2727206e3843dafdd53dae5e71c66839d1243f2096p9pk_253da625-d87e-4a1c-823d-dd201b0fc1bf/extract/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.466620 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-5jtfh_18a8d35c-05ad-4057-b88e-e5f0d417678f/kube-rbac-proxy/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.503509 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-5hn9n_9762a11c-fd59-489a-9a95-7725f4d1c9e4/kube-rbac-proxy/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.511640 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-5jtfh_18a8d35c-05ad-4057-b88e-e5f0d417678f/manager/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.732725 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-5hn9n_9762a11c-fd59-489a-9a95-7725f4d1c9e4/manager/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.748440 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-vrm2h_5323f8b0-18ba-42eb-9a73-ee25c2592aea/manager/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.772085 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-vrm2h_5323f8b0-18ba-42eb-9a73-ee25c2592aea/kube-rbac-proxy/0.log" Nov 23 05:02:21 crc kubenswrapper[4751]: I1123 05:02:21.912284 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-hlrq4_934b92b0-4c8a-48d8-8514-ffe3d566b58d/kube-rbac-proxy/0.log" Nov 23 05:02:21 crc 
kubenswrapper[4751]: I1123 05:02:21.982444 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-hlrq4_934b92b0-4c8a-48d8-8514-ffe3d566b58d/manager/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.035318 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-wh2rd_3ce24a56-0cc8-4f63-91da-dde87342529b/kube-rbac-proxy/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.157400 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-wh2rd_3ce24a56-0cc8-4f63-91da-dde87342529b/manager/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.161090 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-js2kh_abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa/kube-rbac-proxy/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.232621 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-js2kh_abfd2e26-6ab0-4fa5-8d93-c5da3654aaaa/manager/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.335470 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-447nt_59fa9d8a-cc64-478a-be71-fda41132aca9/kube-rbac-proxy/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.475767 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-447nt_59fa9d8a-cc64-478a-be71-fda41132aca9/manager/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.527423 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-7pqmh_8e942272-3be5-4bec-b764-ab18709fbb4d/kube-rbac-proxy/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.551656 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-7pqmh_8e942272-3be5-4bec-b764-ab18709fbb4d/manager/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.637004 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-h6524_09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb/kube-rbac-proxy/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.769296 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-h6524_09d7ed0f-e2f8-4d49-8d23-41a7a4b900fb/manager/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.813281 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-5755c_d877cc36-10d3-4ba0-8140-ad4f89a2b855/kube-rbac-proxy/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.843537 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-5755c_d877cc36-10d3-4ba0-8140-ad4f89a2b855/manager/0.log"
Nov 23 05:02:22 crc kubenswrapper[4751]: I1123 05:02:22.923130 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-rr4vk_f65918da-cd67-4138-bcb4-1316d398b30e/kube-rbac-proxy/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.012168 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-rr4vk_f65918da-cd67-4138-bcb4-1316d398b30e/manager/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.109807 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-6dwlq_c1d9b3d4-a044-46e5-be2c-463d728a4c5d/kube-rbac-proxy/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.241915 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-6dwlq_c1d9b3d4-a044-46e5-be2c-463d728a4c5d/manager/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.305495 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-nw29b_3fe1e718-5530-4899-9a28-fbaa27ed08f4/kube-rbac-proxy/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.357054 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-nw29b_3fe1e718-5530-4899-9a28-fbaa27ed08f4/manager/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.435068 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-zstqk_7f713385-4fd0-462a-8812-ae2cc7ad910b/kube-rbac-proxy/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.524877 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-zstqk_7f713385-4fd0-462a-8812-ae2cc7ad910b/manager/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.617701 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq_5176792c-6b3a-46cc-9ddb-5416391ce264/kube-rbac-proxy/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.644552 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-wq7dq_5176792c-6b3a-46cc-9ddb-5416391ce264/manager/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.707696 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5849b9999b-qsxxk_96b8f5ab-6091-4283-b8ff-76a80465d4a0/kube-rbac-proxy/0.log"
Nov 23 05:02:23 crc kubenswrapper[4751]: I1123 05:02:23.883069 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-549d6967c7-krhsr_292f5bac-dc69-4084-814f-509540c16426/kube-rbac-proxy/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.052076 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zm8mr_906bbca3-2aaf-47a8-ba3e-ee004ca911d6/registry-server/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.072193 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-549d6967c7-krhsr_292f5bac-dc69-4084-814f-509540c16426/operator/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.282062 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-scc77_d96aa695-fa54-4828-b37d-9c4e5121344a/kube-rbac-proxy/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.379142 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-scc77_d96aa695-fa54-4828-b37d-9c4e5121344a/manager/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.531478 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-58plx_4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d/kube-rbac-proxy/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.534528 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-58plx_4f03c5c8-8a2d-43df-be5b-5c61b6ebf84d/manager/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.599828 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-4r7vd_0d5ce886-fa22-4fc1-a369-0311b8a22353/operator/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.796006 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-q87hr_b49833ae-797b-42aa-aa69-4ddc939dcad6/kube-rbac-proxy/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.829052 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-q87hr_b49833ae-797b-42aa-aa69-4ddc939dcad6/manager/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.842146 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5849b9999b-qsxxk_96b8f5ab-6091-4283-b8ff-76a80465d4a0/manager/0.log"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.896672 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rbqz4"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.896945 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rbqz4"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.944559 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rbqz4"
Nov 23 05:02:24 crc kubenswrapper[4751]: I1123 05:02:24.990306 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-8ldr4_763c6ab4-8128-46d4-9c53-45c1b2cd7ecc/kube-rbac-proxy/0.log"
Nov 23 05:02:25 crc kubenswrapper[4751]: I1123 05:02:25.026242 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-8ldr4_763c6ab4-8128-46d4-9c53-45c1b2cd7ecc/manager/0.log"
Nov 23 05:02:25 crc kubenswrapper[4751]: I1123 05:02:25.051523 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-7tkw8_79f85f0e-fcac-4778-8dac-0a2953ba9c8d/kube-rbac-proxy/0.log"
Nov 23 05:02:25 crc kubenswrapper[4751]: I1123 05:02:25.056738 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-7tkw8_79f85f0e-fcac-4778-8dac-0a2953ba9c8d/manager/0.log"
Nov 23 05:02:25 crc kubenswrapper[4751]: I1123 05:02:25.157175 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-p76zg_fd3d207d-1aae-49de-984e-ca3ebf42f864/kube-rbac-proxy/0.log"
Nov 23 05:02:25 crc kubenswrapper[4751]: I1123 05:02:25.201403 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-p76zg_fd3d207d-1aae-49de-984e-ca3ebf42f864/manager/0.log"
Nov 23 05:02:25 crc kubenswrapper[4751]: I1123 05:02:25.626071 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rbqz4"
Nov 23 05:02:25 crc kubenswrapper[4751]: I1123 05:02:25.681174 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rbqz4"]
Nov 23 05:02:27 crc kubenswrapper[4751]: I1123 05:02:27.594690 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rbqz4" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" containerName="registry-server" containerID="cri-o://d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708" gracePeriod=2
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.380865 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rbqz4"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.565086 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-catalog-content\") pod \"7eb77113-c29a-479d-a144-08f4da4e0881\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") "
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.565216 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7j9m\" (UniqueName: \"kubernetes.io/projected/7eb77113-c29a-479d-a144-08f4da4e0881-kube-api-access-j7j9m\") pod \"7eb77113-c29a-479d-a144-08f4da4e0881\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") "
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.565385 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-utilities\") pod \"7eb77113-c29a-479d-a144-08f4da4e0881\" (UID: \"7eb77113-c29a-479d-a144-08f4da4e0881\") "
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.566061 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-utilities" (OuterVolumeSpecName: "utilities") pod "7eb77113-c29a-479d-a144-08f4da4e0881" (UID: "7eb77113-c29a-479d-a144-08f4da4e0881"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.570385 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb77113-c29a-479d-a144-08f4da4e0881-kube-api-access-j7j9m" (OuterVolumeSpecName: "kube-api-access-j7j9m") pod "7eb77113-c29a-479d-a144-08f4da4e0881" (UID: "7eb77113-c29a-479d-a144-08f4da4e0881"). InnerVolumeSpecName "kube-api-access-j7j9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.604043 4751 generic.go:334] "Generic (PLEG): container finished" podID="7eb77113-c29a-479d-a144-08f4da4e0881" containerID="d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708" exitCode=0
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.604088 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbqz4" event={"ID":"7eb77113-c29a-479d-a144-08f4da4e0881","Type":"ContainerDied","Data":"d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708"}
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.604110 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rbqz4"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.604150 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbqz4" event={"ID":"7eb77113-c29a-479d-a144-08f4da4e0881","Type":"ContainerDied","Data":"72453f7e2f293aae80f106bfc16a6b023e8b484822e83efba228041a20d846e0"}
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.604175 4751 scope.go:117] "RemoveContainer" containerID="d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.613365 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7eb77113-c29a-479d-a144-08f4da4e0881" (UID: "7eb77113-c29a-479d-a144-08f4da4e0881"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.634616 4751 scope.go:117] "RemoveContainer" containerID="af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.668101 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.668137 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7j9m\" (UniqueName: \"kubernetes.io/projected/7eb77113-c29a-479d-a144-08f4da4e0881-kube-api-access-j7j9m\") on node \"crc\" DevicePath \"\""
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.668149 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb77113-c29a-479d-a144-08f4da4e0881-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.678810 4751 scope.go:117] "RemoveContainer" containerID="a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.715879 4751 scope.go:117] "RemoveContainer" containerID="d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708"
Nov 23 05:02:28 crc kubenswrapper[4751]: E1123 05:02:28.716485 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708\": container with ID starting with d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708 not found: ID does not exist" containerID="d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.716527 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708"} err="failed to get container status \"d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708\": rpc error: code = NotFound desc = could not find container \"d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708\": container with ID starting with d71bd28b76834943128dc7525e92a72b5210fd148bd61d25b34751c69be43708 not found: ID does not exist"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.716555 4751 scope.go:117] "RemoveContainer" containerID="af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738"
Nov 23 05:02:28 crc kubenswrapper[4751]: E1123 05:02:28.717067 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738\": container with ID starting with af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738 not found: ID does not exist" containerID="af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.717095 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738"} err="failed to get container status \"af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738\": rpc error: code = NotFound desc = could not find container \"af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738\": container with ID starting with af3b11fe801c26054e10a0513b76a7ef8e09441591d819e552556e145b829738 not found: ID does not exist"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.717119 4751 scope.go:117] "RemoveContainer" containerID="a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5"
Nov 23 05:02:28 crc kubenswrapper[4751]: E1123 05:02:28.717451 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5\": container with ID starting with a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5 not found: ID does not exist" containerID="a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.717499 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5"} err="failed to get container status \"a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5\": rpc error: code = NotFound desc = could not find container \"a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5\": container with ID starting with a4bd8ca5d5b8199927f8e83a52b1a50938aae9ffad52ac96de69138e53a198e5 not found: ID does not exist"
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.924118 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rbqz4"]
Nov 23 05:02:28 crc kubenswrapper[4751]: I1123 05:02:28.930804 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rbqz4"]
Nov 23 05:02:30 crc kubenswrapper[4751]: I1123 05:02:30.654203 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" path="/var/lib/kubelet/pods/7eb77113-c29a-479d-a144-08f4da4e0881/volumes"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.362439 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rcnd8"]
Nov 23 05:02:36 crc kubenswrapper[4751]: E1123 05:02:36.363558 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" containerName="extract-utilities"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.363577 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" containerName="extract-utilities"
Nov 23 05:02:36 crc kubenswrapper[4751]: E1123 05:02:36.363621 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" containerName="extract-content"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.363630 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" containerName="extract-content"
Nov 23 05:02:36 crc kubenswrapper[4751]: E1123 05:02:36.363654 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" containerName="registry-server"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.363663 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" containerName="registry-server"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.363883 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb77113-c29a-479d-a144-08f4da4e0881" containerName="registry-server"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.365689 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.386918 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcnd8"]
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.514740 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-utilities\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.514860 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-catalog-content\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.515074 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqtpq\" (UniqueName: \"kubernetes.io/projected/59c5878c-ea3a-4396-9523-2fef4cf5c414-kube-api-access-tqtpq\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.617619 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-catalog-content\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.617787 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqtpq\" (UniqueName: \"kubernetes.io/projected/59c5878c-ea3a-4396-9523-2fef4cf5c414-kube-api-access-tqtpq\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.617872 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-utilities\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.618125 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-catalog-content\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.618364 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-utilities\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.639313 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqtpq\" (UniqueName: \"kubernetes.io/projected/59c5878c-ea3a-4396-9523-2fef4cf5c414-kube-api-access-tqtpq\") pod \"redhat-marketplace-rcnd8\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") " pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:36 crc kubenswrapper[4751]: I1123 05:02:36.688853 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:37 crc kubenswrapper[4751]: I1123 05:02:37.152818 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcnd8"]
Nov 23 05:02:37 crc kubenswrapper[4751]: E1123 05:02:37.607728 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c5878c_ea3a_4396_9523_2fef4cf5c414.slice/crio-conmon-d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c5878c_ea3a_4396_9523_2fef4cf5c414.slice/crio-d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 05:02:37 crc kubenswrapper[4751]: I1123 05:02:37.690501 4751 generic.go:334] "Generic (PLEG): container finished" podID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerID="d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54" exitCode=0
Nov 23 05:02:37 crc kubenswrapper[4751]: I1123 05:02:37.690609 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcnd8" event={"ID":"59c5878c-ea3a-4396-9523-2fef4cf5c414","Type":"ContainerDied","Data":"d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54"}
Nov 23 05:02:37 crc kubenswrapper[4751]: I1123 05:02:37.691155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcnd8" event={"ID":"59c5878c-ea3a-4396-9523-2fef4cf5c414","Type":"ContainerStarted","Data":"2c362b43377913b5f7389fb98912fe6daf9ab5504fc2d12370bb84e86765827d"}
Nov 23 05:02:38 crc kubenswrapper[4751]: I1123 05:02:38.114785 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 05:02:38 crc kubenswrapper[4751]: I1123 05:02:38.115197 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 05:02:38 crc kubenswrapper[4751]: I1123 05:02:38.701154 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcnd8" event={"ID":"59c5878c-ea3a-4396-9523-2fef4cf5c414","Type":"ContainerStarted","Data":"e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307"}
Nov 23 05:02:39 crc kubenswrapper[4751]: I1123 05:02:39.711728 4751 generic.go:334] "Generic (PLEG): container finished" podID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerID="e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307" exitCode=0
Nov 23 05:02:39 crc kubenswrapper[4751]: I1123 05:02:39.711850 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcnd8" event={"ID":"59c5878c-ea3a-4396-9523-2fef4cf5c414","Type":"ContainerDied","Data":"e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307"}
Nov 23 05:02:40 crc kubenswrapper[4751]: I1123 05:02:40.728553 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcnd8" event={"ID":"59c5878c-ea3a-4396-9523-2fef4cf5c414","Type":"ContainerStarted","Data":"f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed"}
Nov 23 05:02:40 crc kubenswrapper[4751]: I1123 05:02:40.758506 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rcnd8" podStartSLOduration=2.325055296 podStartE2EDuration="4.75847989s" podCreationTimestamp="2025-11-23 05:02:36 +0000 UTC" firstStartedPulling="2025-11-23 05:02:37.693278196 +0000 UTC m=+4053.886949555" lastFinishedPulling="2025-11-23 05:02:40.12670278 +0000 UTC m=+4056.320374149" observedRunningTime="2025-11-23 05:02:40.754903629 +0000 UTC m=+4056.948575038" watchObservedRunningTime="2025-11-23 05:02:40.75847989 +0000 UTC m=+4056.952151289"
Nov 23 05:02:43 crc kubenswrapper[4751]: I1123 05:02:43.251617 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cbnj9_fa7612e7-e0b7-4b66-a948-fc5bc3aa3033/control-plane-machine-set-operator/0.log"
Nov 23 05:02:43 crc kubenswrapper[4751]: I1123 05:02:43.438817 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8kg8p_b65a94d8-c328-457e-ac66-f6d62f592d55/machine-api-operator/0.log"
Nov 23 05:02:43 crc kubenswrapper[4751]: I1123 05:02:43.443054 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8kg8p_b65a94d8-c328-457e-ac66-f6d62f592d55/kube-rbac-proxy/0.log"
Nov 23 05:02:46 crc kubenswrapper[4751]: I1123 05:02:46.689198 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:46 crc kubenswrapper[4751]: I1123 05:02:46.689615 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:46 crc kubenswrapper[4751]: I1123 05:02:46.734700 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:46 crc kubenswrapper[4751]: I1123 05:02:46.840249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:46 crc kubenswrapper[4751]: I1123 05:02:46.970566 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcnd8"]
Nov 23 05:02:48 crc kubenswrapper[4751]: I1123 05:02:48.801568 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rcnd8" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerName="registry-server" containerID="cri-o://f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed" gracePeriod=2
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.260037 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.367485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-utilities\") pod \"59c5878c-ea3a-4396-9523-2fef4cf5c414\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") "
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.367582 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqtpq\" (UniqueName: \"kubernetes.io/projected/59c5878c-ea3a-4396-9523-2fef4cf5c414-kube-api-access-tqtpq\") pod \"59c5878c-ea3a-4396-9523-2fef4cf5c414\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") "
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.367627 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-catalog-content\") pod \"59c5878c-ea3a-4396-9523-2fef4cf5c414\" (UID: \"59c5878c-ea3a-4396-9523-2fef4cf5c414\") "
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.368834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-utilities" (OuterVolumeSpecName: "utilities") pod "59c5878c-ea3a-4396-9523-2fef4cf5c414" (UID: "59c5878c-ea3a-4396-9523-2fef4cf5c414"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.380536 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c5878c-ea3a-4396-9523-2fef4cf5c414-kube-api-access-tqtpq" (OuterVolumeSpecName: "kube-api-access-tqtpq") pod "59c5878c-ea3a-4396-9523-2fef4cf5c414" (UID: "59c5878c-ea3a-4396-9523-2fef4cf5c414"). InnerVolumeSpecName "kube-api-access-tqtpq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.401033 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59c5878c-ea3a-4396-9523-2fef4cf5c414" (UID: "59c5878c-ea3a-4396-9523-2fef4cf5c414"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.469632 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqtpq\" (UniqueName: \"kubernetes.io/projected/59c5878c-ea3a-4396-9523-2fef4cf5c414-kube-api-access-tqtpq\") on node \"crc\" DevicePath \"\""
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.469673 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.469686 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c5878c-ea3a-4396-9523-2fef4cf5c414-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.825909 4751 generic.go:334] "Generic (PLEG): container finished" podID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerID="f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed" exitCode=0
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.825968 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcnd8" event={"ID":"59c5878c-ea3a-4396-9523-2fef4cf5c414","Type":"ContainerDied","Data":"f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed"}
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.826006 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcnd8" event={"ID":"59c5878c-ea3a-4396-9523-2fef4cf5c414","Type":"ContainerDied","Data":"2c362b43377913b5f7389fb98912fe6daf9ab5504fc2d12370bb84e86765827d"}
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.826034 4751 scope.go:117] "RemoveContainer" containerID="f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.826199 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcnd8"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.856245 4751 scope.go:117] "RemoveContainer" containerID="e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.893910 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcnd8"]
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.898049 4751 scope.go:117] "RemoveContainer" containerID="d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.912292 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcnd8"]
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.940938 4751 scope.go:117] "RemoveContainer" containerID="f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed"
Nov 23 05:02:49 crc kubenswrapper[4751]: E1123 05:02:49.941566 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed\": container with ID starting with f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed not found: ID does not exist" containerID="f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.941622 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed"} err="failed to get container status \"f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed\": rpc error: code = NotFound desc = could not find container \"f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed\": container with ID starting with f3cf45f7621cea7aa6aecd5d35eae2fc717fd431fe2d415b1e50f9d5efda0bed not found: ID does not exist"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.941658 4751 scope.go:117] "RemoveContainer" containerID="e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307"
Nov 23 05:02:49 crc kubenswrapper[4751]: E1123 05:02:49.942073 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307\": container with ID starting with e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307 not found: ID does not exist" containerID="e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.942170 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307"} err="failed to get container status \"e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307\": rpc error: code = NotFound desc = could not find container \"e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307\": container with ID starting with e2c98ab589316d0ccacee6130e6b7ac78ef6d742575f9ca71c411dca58cda307 not found: ID does not exist"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.942219 4751 scope.go:117] "RemoveContainer" containerID="d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54"
Nov 23 05:02:49 crc kubenswrapper[4751]: E1123 05:02:49.942687 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54\": container with ID starting with d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54 not found: ID does not exist" containerID="d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54"
Nov 23 05:02:49 crc kubenswrapper[4751]: I1123 05:02:49.942730 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54"} err="failed to get container status \"d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54\": rpc error: code = NotFound desc = could not find container \"d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54\": container with ID starting with d8c5b97c9bf2464d47beafeb2b13dcebe87e963613c516b5a032906a9b20dd54 not found: ID does not exist"
Nov 23 05:02:50 crc kubenswrapper[4751]: I1123 05:02:50.655997 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" path="/var/lib/kubelet/pods/59c5878c-ea3a-4396-9523-2fef4cf5c414/volumes"
Nov 23 05:02:55 crc kubenswrapper[4751]: I1123 05:02:55.982806 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-52xvz_fcc30abb-9ab6-4b0f-b27c-8772f6026dd7/cert-manager-controller/0.log"
Nov 23 05:02:56 crc kubenswrapper[4751]: I1123 05:02:56.196123 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-n7m2q_ac86f9c3-7a1c-430f-abdc-3002de03a7df/cert-manager-cainjector/0.log"
Nov 23 05:02:56 crc kubenswrapper[4751]: I1123 05:02:56.214890 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-rcqpp_6a01fd13-4ae3-4ea2-9fc1-e79f4b31e7a3/cert-manager-webhook/0.log"
Nov 23 05:03:08 crc kubenswrapper[4751]: I1123 05:03:08.114102 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 05:03:08 crc kubenswrapper[4751]: I1123 05:03:08.114765 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 05:03:08 crc kubenswrapper[4751]: I1123 05:03:08.114810 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45"
Nov 23 05:03:08 crc kubenswrapper[4751]: I1123 05:03:08.115494 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef64413a8e00d40f728280303205d22ac7788b8802515777d2fa010c1e21ef60"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 05:03:08 crc kubenswrapper[4751]: I1123 05:03:08.115562 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://ef64413a8e00d40f728280303205d22ac7788b8802515777d2fa010c1e21ef60" gracePeriod=600
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.032732 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="ef64413a8e00d40f728280303205d22ac7788b8802515777d2fa010c1e21ef60" exitCode=0
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.032930 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"ef64413a8e00d40f728280303205d22ac7788b8802515777d2fa010c1e21ef60"}
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.033232 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerStarted","Data":"3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c"}
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.033270 4751 scope.go:117] "RemoveContainer" containerID="3f043e082ec4b8b0f8082248c15a63e30050c7c32ac9b95c9b336484153094d9"
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.551947 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-flnqj_6b49364f-9a9b-4be9-b128-1a1b708073cc/nmstate-console-plugin/0.log"
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.701910 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q9z6z_ac9fa491-4c47-4862-bb2f-96dd556da176/nmstate-handler/0.log"
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.732059 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-k5jpj_b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5/kube-rbac-proxy/0.log"
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.769920 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-k5jpj_b42c1e88-21f9-4d2b-86dc-b6ed330d3eb5/nmstate-metrics/0.log"
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.881697 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-dhvb2_9dff0d36-e2c4-4a48-a395-4ef9cae05540/nmstate-operator/0.log"
Nov 23 05:03:09 crc kubenswrapper[4751]: I1123 05:03:09.969931 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-8tdcc_67521947-5803-47c5-95ee-ff1331b80d30/nmstate-webhook/0.log"
Nov 23 05:03:25 crc kubenswrapper[4751]: I1123 05:03:25.657004 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5kht5_0a9bcd23-2927-40fc-be78-28a85fd0c43c/kube-rbac-proxy/0.log"
Nov 23 05:03:25 crc kubenswrapper[4751]: I1123 05:03:25.795856 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5kht5_0a9bcd23-2927-40fc-be78-28a85fd0c43c/controller/0.log"
Nov 23 05:03:25 crc kubenswrapper[4751]: I1123 05:03:25.854964 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-frr-files/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.043755 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-metrics/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.072693 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-reloader/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.079798 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-frr-files/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.081050 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-reloader/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.237127 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-frr-files/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.269546 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-metrics/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.271454 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-reloader/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.274735 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-metrics/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.449951 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-metrics/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.453660 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-frr-files/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.462425 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/controller/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.526423 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/cp-reloader/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.604281 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/kube-rbac-proxy/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.623165 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/frr-metrics/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.726088 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/kube-rbac-proxy-frr/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.799909 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/reloader/0.log"
Nov 23 05:03:26 crc kubenswrapper[4751]: I1123 05:03:26.918130 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-64sx8_52ebaa08-f93a-422b-8c95-728f7ad4a20c/frr-k8s-webhook-server/0.log"
Nov 23 05:03:27 crc kubenswrapper[4751]: I1123 05:03:27.449330 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7577964656-7fb5v_5f06ddd2-0977-4bb4-954a-8bff2da8d49a/webhook-server/0.log"
Nov 23 05:03:27 crc kubenswrapper[4751]: I1123 05:03:27.495803 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-859f4d786d-lx7n9_ed440de8-4a60-48c8-85e5-a0431415aa1e/manager/0.log"
Nov 23 05:03:27 crc kubenswrapper[4751]: I1123 05:03:27.732925 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ps8wm_24d322b0-264c-482c-9daa-9ee340079d1f/kube-rbac-proxy/0.log"
Nov 23 05:03:27 crc kubenswrapper[4751]: I1123 05:03:27.816999 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dgb9d_f2261f62-a80e-45a3-8ab8-b72f43f53d73/frr/0.log"
Nov 23 05:03:28 crc kubenswrapper[4751]: I1123 05:03:28.099814 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-ps8wm_24d322b0-264c-482c-9daa-9ee340079d1f/speaker/0.log"
Nov 23 05:03:40 crc kubenswrapper[4751]: I1123 05:03:40.872202 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/util/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.098600 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/util/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.101407 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/pull/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.113904 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/pull/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.299484 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/extract/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.300971 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/util/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.306911 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ef45dl_99fc265a-c0ec-49a7-a273-192d173d25df/pull/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.483689 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-utilities/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.641317 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-content/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.649218 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-utilities/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.685878 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-content/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.857299 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-content/0.log"
Nov 23 05:03:41 crc kubenswrapper[4751]: I1123 05:03:41.888008 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/extract-utilities/0.log"
Nov 23 05:03:42 crc kubenswrapper[4751]: I1123 05:03:42.022698 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-utilities/0.log"
Nov 23 05:03:42 crc kubenswrapper[4751]: I1123 05:03:42.336747 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-content/0.log"
Nov 23 05:03:42 crc kubenswrapper[4751]: I1123 05:03:42.350120 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lzcgv_79866610-f6cc-4403-822f-6e76628ed0ad/registry-server/0.log"
Nov 23 05:03:42 crc kubenswrapper[4751]: I1123 05:03:42.368102 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-utilities/0.log"
Nov 23 05:03:42 crc kubenswrapper[4751]: I1123 05:03:42.369718 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-content/0.log"
Nov 23 05:03:42 crc kubenswrapper[4751]: I1123 05:03:42.534699 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-content/0.log"
Nov 23 05:03:42 crc kubenswrapper[4751]: I1123 05:03:42.601144 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/extract-utilities/0.log"
Nov 23 05:03:42 crc kubenswrapper[4751]: I1123 05:03:42.728842 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/util/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.038037 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/pull/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.048169 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/util/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.056595 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kfgvt_1f021228-7e3a-4286-a412-59792b2938ce/registry-server/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.081991 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/pull/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.245431 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/util/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.261880 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/extract/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.278770 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6x7bsv_f23641b9-2eca-418a-90e9-13dd56c87cb8/pull/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.462131 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-utilities/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.469365 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b4zhm_77b14a1a-54d8-4706-95b6-2b94d8dffa43/marketplace-operator/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.635562 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-utilities/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.655096 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-content/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.662150 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-content/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.870019 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-utilities/0.log"
Nov 23 05:03:43 crc kubenswrapper[4751]: I1123 05:03:43.872151 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/extract-content/0.log"
Nov 23 05:03:44 crc kubenswrapper[4751]: I1123 05:03:44.028771 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mqtml_373f0c7f-d0d9-49c1-9f9e-6fdc3e6c7453/registry-server/0.log"
Nov 23 05:03:44 crc kubenswrapper[4751]: I1123 05:03:44.053495 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-utilities/0.log"
Nov 23 05:03:44 crc kubenswrapper[4751]: I1123 05:03:44.200982 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-utilities/0.log"
Nov 23 05:03:44 crc kubenswrapper[4751]: I1123 05:03:44.229921 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-content/0.log"
Nov 23 05:03:44 crc kubenswrapper[4751]: I1123 05:03:44.251180 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-content/0.log"
Nov 23 05:03:44 crc kubenswrapper[4751]: I1123 05:03:44.420981 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-utilities/0.log"
Nov 23 05:03:44 crc kubenswrapper[4751]: I1123 05:03:44.449316 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/extract-content/0.log"
Nov 23 05:03:44 crc kubenswrapper[4751]: I1123 05:03:44.890442 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-swqk5_1e37d630-83e9-4049-9b40-b98132ab891b/registry-server/0.log"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.502665 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t2ws4"]
Nov 23 05:04:02 crc kubenswrapper[4751]: E1123 05:04:02.503802 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerName="extract-utilities"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.503818 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerName="extract-utilities"
Nov 23 05:04:02 crc kubenswrapper[4751]: E1123 05:04:02.503848 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerName="extract-content"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.503855 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerName="extract-content"
Nov 23 05:04:02 crc kubenswrapper[4751]: E1123 05:04:02.503870 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerName="registry-server"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.503877 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerName="registry-server"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.504088 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c5878c-ea3a-4396-9523-2fef4cf5c414" containerName="registry-server"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.506945 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.516066 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2ws4"]
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.588297 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-utilities\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.588618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-catalog-content\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.588694 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxlg9\" (UniqueName: \"kubernetes.io/projected/3854e2aa-16fe-4eb5-8958-b43e8103dc19-kube-api-access-wxlg9\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.689907 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-utilities\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.690218 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-catalog-content\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.690452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-utilities\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.690581 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxlg9\" (UniqueName: \"kubernetes.io/projected/3854e2aa-16fe-4eb5-8958-b43e8103dc19-kube-api-access-wxlg9\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.691115 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-catalog-content\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.717585 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxlg9\" (UniqueName: \"kubernetes.io/projected/3854e2aa-16fe-4eb5-8958-b43e8103dc19-kube-api-access-wxlg9\") pod \"redhat-operators-t2ws4\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:02 crc kubenswrapper[4751]: I1123 05:04:02.833925 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2ws4"
Nov 23 05:04:03 crc kubenswrapper[4751]: I1123 05:04:03.369847 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t2ws4"]
Nov 23 05:04:03 crc kubenswrapper[4751]: W1123 05:04:03.375215 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3854e2aa_16fe_4eb5_8958_b43e8103dc19.slice/crio-aa0b7b3beba4ba7979f444b9ba9dca8f98b5372439ab07f74fdf0fa81ea49130 WatchSource:0}: Error finding container aa0b7b3beba4ba7979f444b9ba9dca8f98b5372439ab07f74fdf0fa81ea49130: Status 404 returned error can't find the container with id aa0b7b3beba4ba7979f444b9ba9dca8f98b5372439ab07f74fdf0fa81ea49130
Nov 23 05:04:03 crc kubenswrapper[4751]: I1123 05:04:03.578823 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2ws4" event={"ID":"3854e2aa-16fe-4eb5-8958-b43e8103dc19","Type":"ContainerStarted","Data":"aa0b7b3beba4ba7979f444b9ba9dca8f98b5372439ab07f74fdf0fa81ea49130"}
Nov 23 05:04:04 crc kubenswrapper[4751]: I1123 05:04:04.588470 4751 generic.go:334] "Generic (PLEG): container finished" podID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerID="4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b" exitCode=0
Nov 23 05:04:04 crc kubenswrapper[4751]: I1123 05:04:04.588705 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2ws4" event={"ID":"3854e2aa-16fe-4eb5-8958-b43e8103dc19","Type":"ContainerDied","Data":"4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b"}
Nov 23 05:04:04 crc kubenswrapper[4751]: I1123 05:04:04.590729 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 05:04:05 crc kubenswrapper[4751]: I1123 05:04:05.598142 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2ws4" event={"ID":"3854e2aa-16fe-4eb5-8958-b43e8103dc19","Type":"ContainerStarted","Data":"01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b"}
Nov 23 05:04:07 crc kubenswrapper[4751]: I1123 05:04:07.617568 4751 generic.go:334] "Generic (PLEG): container finished" podID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerID="01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b" exitCode=0
Nov 23 05:04:07 crc kubenswrapper[4751]: I1123 05:04:07.617843 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2ws4" event={"ID":"3854e2aa-16fe-4eb5-8958-b43e8103dc19","Type":"ContainerDied","Data":"01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b"}
Nov 23 05:04:09 crc kubenswrapper[4751]: I1123 05:04:09.635270 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2ws4" event={"ID":"3854e2aa-16fe-4eb5-8958-b43e8103dc19","Type":"ContainerStarted","Data":"ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c"}
Nov 23 05:04:09 crc kubenswrapper[4751]: I1123 05:04:09.659867 4751 pod_startup_latency_tracker.go:104] "Observed pod
startup duration" pod="openshift-marketplace/redhat-operators-t2ws4" podStartSLOduration=3.565855678 podStartE2EDuration="7.659847761s" podCreationTimestamp="2025-11-23 05:04:02 +0000 UTC" firstStartedPulling="2025-11-23 05:04:04.590467971 +0000 UTC m=+4140.784139330" lastFinishedPulling="2025-11-23 05:04:08.684460054 +0000 UTC m=+4144.878131413" observedRunningTime="2025-11-23 05:04:09.656635228 +0000 UTC m=+4145.850306587" watchObservedRunningTime="2025-11-23 05:04:09.659847761 +0000 UTC m=+4145.853519130" Nov 23 05:04:12 crc kubenswrapper[4751]: I1123 05:04:12.835182 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t2ws4" Nov 23 05:04:12 crc kubenswrapper[4751]: I1123 05:04:12.837116 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t2ws4" Nov 23 05:04:13 crc kubenswrapper[4751]: I1123 05:04:13.893068 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t2ws4" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="registry-server" probeResult="failure" output=< Nov 23 05:04:13 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Nov 23 05:04:13 crc kubenswrapper[4751]: > Nov 23 05:04:22 crc kubenswrapper[4751]: I1123 05:04:22.897942 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t2ws4" Nov 23 05:04:22 crc kubenswrapper[4751]: I1123 05:04:22.964221 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t2ws4" Nov 23 05:04:23 crc kubenswrapper[4751]: I1123 05:04:23.136675 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2ws4"] Nov 23 05:04:24 crc kubenswrapper[4751]: I1123 05:04:24.768087 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t2ws4" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="registry-server" containerID="cri-o://ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c" gracePeriod=2 Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.315517 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t2ws4" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.382330 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-utilities\") pod \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.382537 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxlg9\" (UniqueName: \"kubernetes.io/projected/3854e2aa-16fe-4eb5-8958-b43e8103dc19-kube-api-access-wxlg9\") pod \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.382752 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-catalog-content\") pod \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\" (UID: \"3854e2aa-16fe-4eb5-8958-b43e8103dc19\") " Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.382945 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-utilities" (OuterVolumeSpecName: "utilities") pod "3854e2aa-16fe-4eb5-8958-b43e8103dc19" (UID: "3854e2aa-16fe-4eb5-8958-b43e8103dc19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.383476 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.398615 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3854e2aa-16fe-4eb5-8958-b43e8103dc19-kube-api-access-wxlg9" (OuterVolumeSpecName: "kube-api-access-wxlg9") pod "3854e2aa-16fe-4eb5-8958-b43e8103dc19" (UID: "3854e2aa-16fe-4eb5-8958-b43e8103dc19"). InnerVolumeSpecName "kube-api-access-wxlg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.485323 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxlg9\" (UniqueName: \"kubernetes.io/projected/3854e2aa-16fe-4eb5-8958-b43e8103dc19-kube-api-access-wxlg9\") on node \"crc\" DevicePath \"\"" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.505778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3854e2aa-16fe-4eb5-8958-b43e8103dc19" (UID: "3854e2aa-16fe-4eb5-8958-b43e8103dc19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.587071 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854e2aa-16fe-4eb5-8958-b43e8103dc19-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.780024 4751 generic.go:334] "Generic (PLEG): container finished" podID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerID="ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c" exitCode=0 Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.780239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2ws4" event={"ID":"3854e2aa-16fe-4eb5-8958-b43e8103dc19","Type":"ContainerDied","Data":"ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c"} Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.780273 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t2ws4" event={"ID":"3854e2aa-16fe-4eb5-8958-b43e8103dc19","Type":"ContainerDied","Data":"aa0b7b3beba4ba7979f444b9ba9dca8f98b5372439ab07f74fdf0fa81ea49130"} Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.780295 4751 scope.go:117] "RemoveContainer" containerID="ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.780298 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t2ws4" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.826380 4751 scope.go:117] "RemoveContainer" containerID="01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.826870 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t2ws4"] Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.834858 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t2ws4"] Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.853587 4751 scope.go:117] "RemoveContainer" containerID="4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.914420 4751 scope.go:117] "RemoveContainer" containerID="ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c" Nov 23 05:04:25 crc kubenswrapper[4751]: E1123 05:04:25.914868 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c\": container with ID starting with ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c not found: ID does not exist" containerID="ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.914918 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c"} err="failed to get container status \"ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c\": rpc error: code = NotFound desc = could not find container \"ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c\": container with ID starting with ba081837d3e625e93066573fbab921c305155301f71789579d61efaf33803e0c not found: ID does not exist" Nov 23 05:04:25 crc 
kubenswrapper[4751]: I1123 05:04:25.914952 4751 scope.go:117] "RemoveContainer" containerID="01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b" Nov 23 05:04:25 crc kubenswrapper[4751]: E1123 05:04:25.915456 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b\": container with ID starting with 01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b not found: ID does not exist" containerID="01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.915500 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b"} err="failed to get container status \"01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b\": rpc error: code = NotFound desc = could not find container \"01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b\": container with ID starting with 01955b6e857d944beaa0d2e18bd65c5014bc6eca90d375ff151fbc4b830c2c6b not found: ID does not exist" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.915530 4751 scope.go:117] "RemoveContainer" containerID="4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b" Nov 23 05:04:25 crc kubenswrapper[4751]: E1123 05:04:25.915974 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b\": container with ID starting with 4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b not found: ID does not exist" containerID="4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b" Nov 23 05:04:25 crc kubenswrapper[4751]: I1123 05:04:25.916023 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b"} err="failed to get container status \"4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b\": rpc error: code = NotFound desc = could not find container \"4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b\": container with ID starting with 4ae9bc203450f191b18c38fd9e779c656de2e49272c15282f22104715371b71b not found: ID does not exist" Nov 23 05:04:26 crc kubenswrapper[4751]: I1123 05:04:26.665102 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" path="/var/lib/kubelet/pods/3854e2aa-16fe-4eb5-8958-b43e8103dc19/volumes" Nov 23 05:05:08 crc kubenswrapper[4751]: I1123 05:05:08.114323 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 05:05:08 crc kubenswrapper[4751]: I1123 05:05:08.114861 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 05:05:26 crc kubenswrapper[4751]: I1123 05:05:26.489777 4751 generic.go:334] "Generic (PLEG): 
container finished" podID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerID="d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc" exitCode=0 Nov 23 05:05:26 crc kubenswrapper[4751]: I1123 05:05:26.490268 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" event={"ID":"98de9830-ee0a-4453-a26d-0e456c1eef34","Type":"ContainerDied","Data":"d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc"} Nov 23 05:05:26 crc kubenswrapper[4751]: I1123 05:05:26.490872 4751 scope.go:117] "RemoveContainer" containerID="d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc" Nov 23 05:05:27 crc kubenswrapper[4751]: I1123 05:05:27.229316 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fcf8q_must-gather-ffw5v_98de9830-ee0a-4453-a26d-0e456c1eef34/gather/0.log" Nov 23 05:05:31 crc kubenswrapper[4751]: E1123 05:05:31.627942 4751 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.50:44900->38.102.83.50:34905: read tcp 38.102.83.50:44900->38.102.83.50:34905: read: connection reset by peer Nov 23 05:05:37 crc kubenswrapper[4751]: I1123 05:05:37.825518 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fcf8q/must-gather-ffw5v"] Nov 23 05:05:37 crc kubenswrapper[4751]: I1123 05:05:37.826303 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" podUID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerName="copy" containerID="cri-o://728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824" gracePeriod=2 Nov 23 05:05:37 crc kubenswrapper[4751]: I1123 05:05:37.836009 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fcf8q/must-gather-ffw5v"] Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.115254 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.115318 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.427510 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fcf8q_must-gather-ffw5v_98de9830-ee0a-4453-a26d-0e456c1eef34/copy/0.log" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.428205 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.516565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98g6q\" (UniqueName: \"kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q\") pod \"98de9830-ee0a-4453-a26d-0e456c1eef34\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.516610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98de9830-ee0a-4453-a26d-0e456c1eef34-must-gather-output\") pod \"98de9830-ee0a-4453-a26d-0e456c1eef34\" (UID: \"98de9830-ee0a-4453-a26d-0e456c1eef34\") " Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.522305 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q" (OuterVolumeSpecName: "kube-api-access-98g6q") pod "98de9830-ee0a-4453-a26d-0e456c1eef34" (UID: "98de9830-ee0a-4453-a26d-0e456c1eef34"). InnerVolumeSpecName "kube-api-access-98g6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.618641 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98g6q\" (UniqueName: \"kubernetes.io/projected/98de9830-ee0a-4453-a26d-0e456c1eef34-kube-api-access-98g6q\") on node \"crc\" DevicePath \"\"" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.631452 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fcf8q_must-gather-ffw5v_98de9830-ee0a-4453-a26d-0e456c1eef34/copy/0.log" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.632197 4751 generic.go:334] "Generic (PLEG): container finished" podID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerID="728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824" exitCode=143 Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.632279 4751 scope.go:117] "RemoveContainer" containerID="728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.632340 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fcf8q/must-gather-ffw5v" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.648012 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98de9830-ee0a-4453-a26d-0e456c1eef34-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "98de9830-ee0a-4453-a26d-0e456c1eef34" (UID: "98de9830-ee0a-4453-a26d-0e456c1eef34"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.655790 4751 scope.go:117] "RemoveContainer" containerID="d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.663697 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98de9830-ee0a-4453-a26d-0e456c1eef34" path="/var/lib/kubelet/pods/98de9830-ee0a-4453-a26d-0e456c1eef34/volumes" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.720811 4751 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98de9830-ee0a-4453-a26d-0e456c1eef34-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.728366 4751 scope.go:117] "RemoveContainer" containerID="728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824" Nov 23 05:05:38 crc kubenswrapper[4751]: E1123 05:05:38.728940 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824\": container with ID starting with 728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824 not found: ID does not exist" containerID="728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.729043 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824"} err="failed to get container status \"728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824\": rpc error: code = NotFound desc = could not find container \"728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824\": container with ID starting with 728ae8eb480525c3b7ed0c4543a0dc0827345a6dc888f5a143fe8f25b3e74824 not found: ID does not exist" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.729133 4751 scope.go:117] "RemoveContainer" containerID="d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc" Nov 23 05:05:38 crc kubenswrapper[4751]: E1123 05:05:38.729483 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc\": container with ID starting with d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc not found: ID does not exist" containerID="d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc" Nov 23 05:05:38 crc kubenswrapper[4751]: I1123 05:05:38.729580 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc"} err="failed to get container status \"d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc\": rpc error: code = NotFound desc = could not find container \"d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc\": container with ID starting with d52f78bb4f8ab56b5fef1422d1441d5a2899a8a0e6f9496c02edfd9a93b26ccc not found: ID does not exist" Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.114636 4751 patch_prober.go:28] interesting pod/machine-config-daemon-pfb45 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.115199 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.115238 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.116084 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c"} pod="openshift-machine-config-operator/machine-config-daemon-pfb45" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.116154 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerName="machine-config-daemon" containerID="cri-o://3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" gracePeriod=600 Nov 23 05:06:08 crc kubenswrapper[4751]: E1123 05:06:08.240700 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.950918 4751 generic.go:334] "Generic (PLEG): container finished" podID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" exitCode=0 Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.950962 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" event={"ID":"06e1c062-27d7-4432-9f0e-db4e98f65b0e","Type":"ContainerDied","Data":"3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c"} Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.951009 4751 scope.go:117] "RemoveContainer" containerID="ef64413a8e00d40f728280303205d22ac7788b8802515777d2fa010c1e21ef60" Nov 23 05:06:08 crc kubenswrapper[4751]: I1123 05:06:08.951468 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:06:08 crc kubenswrapper[4751]: E1123 05:06:08.951680 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:06:23 crc kubenswrapper[4751]: I1123 05:06:23.644335 4751 scope.go:117] "RemoveContainer" 
containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:06:23 crc kubenswrapper[4751]: E1123 05:06:23.645133 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:06:37 crc kubenswrapper[4751]: I1123 05:06:37.644275 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:06:37 crc kubenswrapper[4751]: E1123 05:06:37.645280 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:06:48 crc kubenswrapper[4751]: I1123 05:06:48.647736 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:06:48 crc kubenswrapper[4751]: E1123 05:06:48.648646 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:07:01 crc kubenswrapper[4751]: I1123 05:07:01.643920 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:07:01 crc kubenswrapper[4751]: E1123 05:07:01.644637 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:07:15 crc kubenswrapper[4751]: I1123 05:07:15.645040 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:07:15 crc kubenswrapper[4751]: E1123 05:07:15.646070 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:07:22 crc kubenswrapper[4751]: I1123 05:07:22.685560 4751 scope.go:117] "RemoveContainer" containerID="cfccae2001eb37b087a249d8ccae204adaa84087f52195406ae3e5fb343114bb" Nov 23 05:07:28 crc kubenswrapper[4751]: I1123 05:07:28.647011 4751 scope.go:117] "RemoveContainer" 
containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:07:28 crc kubenswrapper[4751]: E1123 05:07:28.648169 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:07:40 crc kubenswrapper[4751]: I1123 05:07:40.644643 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:07:40 crc kubenswrapper[4751]: E1123 05:07:40.645764 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:07:54 crc kubenswrapper[4751]: I1123 05:07:54.644638 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:07:54 crc kubenswrapper[4751]: E1123 05:07:54.646185 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:08:05 crc kubenswrapper[4751]: I1123 05:08:05.644286 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:08:05 crc kubenswrapper[4751]: E1123 05:08:05.645126 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:08:19 crc kubenswrapper[4751]: I1123 05:08:19.645097 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:08:19 crc kubenswrapper[4751]: E1123 05:08:19.646134 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:08:31 crc kubenswrapper[4751]: I1123 05:08:31.644854 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:08:31 crc kubenswrapper[4751]: E1123 05:08:31.645443 4751 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.574389 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-68hvl"] Nov 23 05:08:41 crc kubenswrapper[4751]: E1123 05:08:41.576763 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerName="gather" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.576801 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerName="gather" Nov 23 05:08:41 crc kubenswrapper[4751]: E1123 05:08:41.576825 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerName="copy" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.576836 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerName="copy" Nov 23 05:08:41 crc kubenswrapper[4751]: E1123 05:08:41.576896 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="registry-server" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.576910 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="registry-server" Nov 23 05:08:41 crc kubenswrapper[4751]: E1123 05:08:41.576943 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="extract-content" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.576954 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="extract-content" Nov 23 05:08:41 crc kubenswrapper[4751]: E1123 05:08:41.577006 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="extract-utilities" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.577021 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="extract-utilities" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.577681 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerName="gather" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.577717 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="98de9830-ee0a-4453-a26d-0e456c1eef34" containerName="copy" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.577774 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3854e2aa-16fe-4eb5-8958-b43e8103dc19" containerName="registry-server" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.581280 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.598161 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-68hvl"] Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.638973 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-catalog-content\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.639085 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp2nc\" (UniqueName: \"kubernetes.io/projected/cc65d179-d0ee-4ad4-9aab-d441e27c0094-kube-api-access-vp2nc\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.639285 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-utilities\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.741019 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp2nc\" (UniqueName: \"kubernetes.io/projected/cc65d179-d0ee-4ad4-9aab-d441e27c0094-kube-api-access-vp2nc\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.741287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-utilities\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.741618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-catalog-content\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.742213 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-utilities\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.742764 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-catalog-content\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.760327 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vp2nc\" (UniqueName: \"kubernetes.io/projected/cc65d179-d0ee-4ad4-9aab-d441e27c0094-kube-api-access-vp2nc\") pod \"community-operators-68hvl\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:41 crc kubenswrapper[4751]: I1123 05:08:41.912124 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:42 crc kubenswrapper[4751]: I1123 05:08:42.442216 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-68hvl"] Nov 23 05:08:42 crc kubenswrapper[4751]: W1123 05:08:42.465377 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc65d179_d0ee_4ad4_9aab_d441e27c0094.slice/crio-b388ca2d68786eed6fc630ae2445df72f69f71f5da46355f18b06f9db4a0403a WatchSource:0}: Error finding container b388ca2d68786eed6fc630ae2445df72f69f71f5da46355f18b06f9db4a0403a: Status 404 returned error can't find the container with id b388ca2d68786eed6fc630ae2445df72f69f71f5da46355f18b06f9db4a0403a Nov 23 05:08:42 crc kubenswrapper[4751]: I1123 05:08:42.708626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68hvl" event={"ID":"cc65d179-d0ee-4ad4-9aab-d441e27c0094","Type":"ContainerStarted","Data":"dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59"} Nov 23 05:08:42 crc kubenswrapper[4751]: I1123 05:08:42.708673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68hvl" event={"ID":"cc65d179-d0ee-4ad4-9aab-d441e27c0094","Type":"ContainerStarted","Data":"b388ca2d68786eed6fc630ae2445df72f69f71f5da46355f18b06f9db4a0403a"} Nov 23 05:08:43 crc kubenswrapper[4751]: I1123 05:08:43.644920 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:08:43 crc kubenswrapper[4751]: E1123 05:08:43.645629 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:08:43 crc kubenswrapper[4751]: I1123 05:08:43.720489 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc65d179-d0ee-4ad4-9aab-d441e27c0094" containerID="dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59" exitCode=0 Nov 23 05:08:43 crc kubenswrapper[4751]: I1123 05:08:43.720552 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68hvl" event={"ID":"cc65d179-d0ee-4ad4-9aab-d441e27c0094","Type":"ContainerDied","Data":"dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59"} Nov 23 05:08:45 crc kubenswrapper[4751]: I1123 05:08:45.751323 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc65d179-d0ee-4ad4-9aab-d441e27c0094" containerID="1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290" exitCode=0 Nov 23 05:08:45 crc kubenswrapper[4751]: I1123 05:08:45.752028 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68hvl" 
event={"ID":"cc65d179-d0ee-4ad4-9aab-d441e27c0094","Type":"ContainerDied","Data":"1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290"} Nov 23 05:08:46 crc kubenswrapper[4751]: I1123 05:08:46.763082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68hvl" event={"ID":"cc65d179-d0ee-4ad4-9aab-d441e27c0094","Type":"ContainerStarted","Data":"71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e"} Nov 23 05:08:46 crc kubenswrapper[4751]: I1123 05:08:46.788662 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-68hvl" podStartSLOduration=3.3269096080000002 podStartE2EDuration="5.788641119s" podCreationTimestamp="2025-11-23 05:08:41 +0000 UTC" firstStartedPulling="2025-11-23 05:08:43.722716027 +0000 UTC m=+4419.916387406" lastFinishedPulling="2025-11-23 05:08:46.184447518 +0000 UTC m=+4422.378118917" observedRunningTime="2025-11-23 05:08:46.784672557 +0000 UTC m=+4422.978343916" watchObservedRunningTime="2025-11-23 05:08:46.788641119 +0000 UTC m=+4422.982312498" Nov 23 05:08:51 crc kubenswrapper[4751]: I1123 05:08:51.912665 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:51 crc kubenswrapper[4751]: I1123 05:08:51.913431 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:51 crc kubenswrapper[4751]: I1123 05:08:51.995149 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:52 crc kubenswrapper[4751]: I1123 05:08:52.904209 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:52 crc kubenswrapper[4751]: I1123 05:08:52.980618 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-68hvl"] Nov 23 05:08:54 crc kubenswrapper[4751]: I1123 05:08:54.854070 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-68hvl" podUID="cc65d179-d0ee-4ad4-9aab-d441e27c0094" containerName="registry-server" containerID="cri-o://71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e" gracePeriod=2 Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.458104 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.636676 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-utilities\") pod \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.636735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp2nc\" (UniqueName: \"kubernetes.io/projected/cc65d179-d0ee-4ad4-9aab-d441e27c0094-kube-api-access-vp2nc\") pod \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.636885 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-catalog-content\") pod \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\" (UID: \"cc65d179-d0ee-4ad4-9aab-d441e27c0094\") " Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.638385 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-utilities" (OuterVolumeSpecName: "utilities") pod "cc65d179-d0ee-4ad4-9aab-d441e27c0094" (UID: "cc65d179-d0ee-4ad4-9aab-d441e27c0094"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.648631 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc65d179-d0ee-4ad4-9aab-d441e27c0094-kube-api-access-vp2nc" (OuterVolumeSpecName: "kube-api-access-vp2nc") pod "cc65d179-d0ee-4ad4-9aab-d441e27c0094" (UID: "cc65d179-d0ee-4ad4-9aab-d441e27c0094"). InnerVolumeSpecName "kube-api-access-vp2nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.725228 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc65d179-d0ee-4ad4-9aab-d441e27c0094" (UID: "cc65d179-d0ee-4ad4-9aab-d441e27c0094"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.739654 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.739686 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp2nc\" (UniqueName: \"kubernetes.io/projected/cc65d179-d0ee-4ad4-9aab-d441e27c0094-kube-api-access-vp2nc\") on node \"crc\" DevicePath \"\"" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.739699 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc65d179-d0ee-4ad4-9aab-d441e27c0094-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.863031 4751 generic.go:334] "Generic (PLEG): container finished" podID="cc65d179-d0ee-4ad4-9aab-d441e27c0094" containerID="71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e" exitCode=0 Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.863076 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68hvl" event={"ID":"cc65d179-d0ee-4ad4-9aab-d441e27c0094","Type":"ContainerDied","Data":"71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e"} Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.863120 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68hvl" event={"ID":"cc65d179-d0ee-4ad4-9aab-d441e27c0094","Type":"ContainerDied","Data":"b388ca2d68786eed6fc630ae2445df72f69f71f5da46355f18b06f9db4a0403a"} Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.863138 4751 scope.go:117] "RemoveContainer" containerID="71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.863386 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-68hvl" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.886527 4751 scope.go:117] "RemoveContainer" containerID="1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290" Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.911340 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-68hvl"] Nov 23 05:08:55 crc kubenswrapper[4751]: I1123 05:08:55.920097 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-68hvl"] Nov 23 05:08:56 crc kubenswrapper[4751]: I1123 05:08:56.358588 4751 scope.go:117] "RemoveContainer" containerID="dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59" Nov 23 05:08:56 crc kubenswrapper[4751]: I1123 05:08:56.548378 4751 scope.go:117] "RemoveContainer" containerID="71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e" Nov 23 05:08:56 crc kubenswrapper[4751]: E1123 05:08:56.548915 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e\": container with ID starting with 71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e not found: ID does not exist" containerID="71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e" Nov 23 05:08:56 crc kubenswrapper[4751]: I1123 05:08:56.548972 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e"} err="failed to get container status \"71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e\": rpc error: code = NotFound desc = could not find container \"71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e\": container with ID starting with 71e9a782b80676fb5449d45ff353ac579d02051b3d138a6004f4f78e746c4d9e not found: ID does not exist" Nov 23 05:08:56 crc kubenswrapper[4751]: I1123 05:08:56.549016 4751 scope.go:117] "RemoveContainer" containerID="1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290" Nov 23 05:08:56 crc kubenswrapper[4751]: E1123 05:08:56.549614 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290\": container with ID starting with 1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290 not found: ID does not exist" containerID="1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290" Nov 23 05:08:56 crc kubenswrapper[4751]: I1123 05:08:56.549650 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290"} err="failed to get container status \"1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290\": rpc error: code = NotFound desc = could not find container \"1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290\": container with ID starting with 1f48cbd4a0996943d7207fa098fb704c0c47822ced1e47510824b3c3d7fcd290 not found: ID does not exist" Nov 23 05:08:56 crc kubenswrapper[4751]: I1123 05:08:56.549677 4751 scope.go:117] "RemoveContainer" containerID="dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59" Nov 23 05:08:56 crc kubenswrapper[4751]: E1123 05:08:56.550105 4751 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59\": container with ID starting with dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59 not found: ID does not exist" containerID="dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59" Nov 23 05:08:56 crc kubenswrapper[4751]: I1123 05:08:56.550138 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59"} err="failed to get container status \"dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59\": rpc error: code = NotFound desc = could not find container \"dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59\": container with ID starting with dd873d4cf259147c469bdf20c85f321225b69e0c192363c06a3fe63aa7318e59 not found: ID does not exist" Nov 23 05:08:56 crc kubenswrapper[4751]: I1123 05:08:56.658336 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc65d179-d0ee-4ad4-9aab-d441e27c0094" path="/var/lib/kubelet/pods/cc65d179-d0ee-4ad4-9aab-d441e27c0094/volumes" Nov 23 05:08:57 crc kubenswrapper[4751]: I1123 05:08:57.644043 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:08:57 crc kubenswrapper[4751]: E1123 05:08:57.644300 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:09:08 crc kubenswrapper[4751]: I1123 05:09:08.644445 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:09:08 crc kubenswrapper[4751]: E1123 05:09:08.645033 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:09:21 crc kubenswrapper[4751]: I1123 05:09:21.644466 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:09:21 crc kubenswrapper[4751]: E1123 05:09:21.646856 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e" Nov 23 05:09:34 crc kubenswrapper[4751]: I1123 05:09:34.651479 4751 scope.go:117] "RemoveContainer" containerID="3b66807d701a61916e8f58b6eb11ea07724376ffa80441697a12d48b8e2dbf9c" Nov 23 05:09:34 crc kubenswrapper[4751]: E1123 05:09:34.652709 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pfb45_openshift-machine-config-operator(06e1c062-27d7-4432-9f0e-db4e98f65b0e)\"" pod="openshift-machine-config-operator/machine-config-daemon-pfb45" podUID="06e1c062-27d7-4432-9f0e-db4e98f65b0e"